go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
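The focus regex above selects the single spec "[sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards". For a manual re-run, a shorter focus substring is usually sufficient; the variant below is a sketch under that assumption (same hack/e2e.go wrapper, and a cluster that may be disrupted, since the spec is tagged [Disruptive]), and the pattern may need tightening if it matches more than one spec.

  # Shortened focus pattern (assumption); widen or tighten as needed.
  go run hack/e2e.go -v --test \
    --test_args='--ginkgo.focus=dropping\sall\sinbound\spackets'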
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:05:29.066
from ginkgo_report.xml
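For reference, the escaped SSH payload that appears repeatedly in the log below expands to the following script. It keeps loopback open, inserts an iptables rule that drops all other inbound traffic for roughly two minutes (10s delay plus 120s sleep), then removes both rules again; this is just the logged payload reformatted for readability, not an excerpt from the test source.

  nohup sh -c '
      set -x
      sleep 10
      while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
      while true; do sudo iptables -I INPUT 2 -j DROP && break; done
      date
      sleep 120
      while true; do sudo iptables -D INPUT -j DROP && break; done
      while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
  ' >/tmp/drop-inbound.log 2>&1 &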
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:03:10.553
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:03:10.553 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:03:10.553
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:03:10.553
Jan 29 08:03:10.553: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:03:10.554
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:03:10.677
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:03:10.758
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:03:10.839 (286ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:03:10.839
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:03:10.839 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 08:03:10.839
Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-ndwb
Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-kkkk
Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-z5pf
Jan 29 08:03:10.974: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true
Jan 29 08:03:11.006: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true
Jan 29 08:03:11.007: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true
Jan 29 08:03:11.016: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6]
Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6]
Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 08:03:11.049: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt]
Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt]
Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 08:03:11.049: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes:
[metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 42.460792ms Jan 29 08:03:11.059: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 42.54793ms Jan 29 08:03:11.059: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:03:11.059: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:03:11.059: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:03:11.093: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.969869ms Jan 29 08:03:11.093: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 44.289388ms Jan 29 08:03:11.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 45.426836ms Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. Elapsed: 45.555584ms Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt] Jan 29 08:03:11.095: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:03:11.095: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 46.008883ms Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 46.176259ms Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.095: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:03:11.095: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:03:11.580: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:03:11.616: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:03:11.619: INFO: ssh 
prow@34.83.224.154:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: stdout: "" Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: exit code: 0 Jan 29 08:03:11.619: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z5pf condition Ready to be false Jan 29 08:03:11.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:11.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:11.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:03:21.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:21.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:27.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:27.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:28.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.188: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:49.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:49.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:04:49.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.630: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:03.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:03.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:03.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:05.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:05.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:05.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:07.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:07.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:07.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:09.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:09.994: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:09.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:11.968: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:05:11.995: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-z5pf didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test. Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test. Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test. 
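All three nodes stayed Ready=true for the entire 2m0s window, so the helper marks the reboot as failed before running the termination hook below. When triaging such a run by hand, the same Ready condition the test polls can be read straight from the API; the commands below are a generic sketch (node name taken from the log above, working cluster credentials assumed), not something the test executes.

  # Ready condition of every node as currently reported by the API server.
  kubectl get nodes -o wide

  # Full Ready condition (status, reason, transition times) for one of the nodes above.
  kubectl get node bootstrap-e2e-minion-group-ndwb \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'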
Jan 29 08:05:11.996: INFO: Executing termination hook on nodes
Jan 29 08:05:11.996: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk
Jan 29 08:05:11.996: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22)
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: stderr: ""
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: exit code: 0
Jan 29 08:05:28.005: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb
Jan 29 08:05:28.005: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22)
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: stderr: ""
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: exit code: 0
Jan 29 08:05:28.538: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf
Jan 29 08:05:28.538: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22)
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: stderr: ""
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:05:29.066
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 08:05:29.066 (2m18.227s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:05:29.066
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:05:29.066 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:05:29.116: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:05:29.116: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
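The events above already show the coredns readiness/liveness probe failures, the container back-offs, and the etcd-server restarts on the master. If the cluster is still reachable after the run, roughly the same picture can be pulled interactively; this is an illustrative sequence under that assumption, not part of the AfterEach hook itself.

  # Recent kube-system events, oldest first (similar to what the AfterEach hook dumps).
  kubectl -n kube-system get events --sort-by=.lastTimestamp

  # Restart counts and node placement for the pods named in the events, plus one example describe.
  kubectl -n kube-system get pods -o wide
  kubectl -n kube-system describe pod coredns-6846b5b5f-xx69z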
Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
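All three konnectivity-agent pods (5fbzh, dr7js, rnjhw) report failed liveness probes and restarts while inbound traffic was being dropped, which is expected during the blackout but worth confirming afterwards. A possible follow-up, assuming the pods still exist and at least one restart occurred (pod name taken from the events above; --previous only works when a prior container instance exists):

  # Current and previous konnectivity-agent container logs for the pod on the ndwb node.
  kubectl -n kube-system logs konnectivity-agent-rnjhw
  kubectl -n kube-system logs konnectivity-agent-rnjhw --previous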
Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:05:29.116: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
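
The FailedScheduling messages for kube-dns-autoscaler ("no nodes available to schedule pods", then "0/1 nodes are available: 1 node(s) were unschedulable") cover the window in which, presumably, only the master had registered — which matches the Unschedulable flag and NoSchedule taints shown for bootstrap-e2e-master later in this dump. A hedged client-go sketch for surfacing the same information, i.e. which nodes are marked unschedulable and what taints they carry, assuming a kubeconfig is supplied via the KUBECONFIG environment variable:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed: a kubeconfig reachable via $KUBECONFIG.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // An unschedulable node and/or NoSchedule taints produce the
            // "1 node(s) were unschedulable" wording seen in the events.
            fmt.Printf("%s unschedulable=%v taints=%v\n", n.Name, n.Spec.Unschedulable, n.Spec.Taints)
        }
    }
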
Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
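
Several of the pods above (konnectivity-agent, kube-addon-manager, kube-dns-autoscaler, kube-proxy) report "BackOff: Back-off restarting failed container", i.e. the kubelet is delaying restarts of a crash-looping container with an exponentially growing back-off. As a sketch only — the 10-second initial delay and 5-minute cap are the commonly cited kubelet defaults, assumed here rather than read from this cluster — the delay grows roughly as follows:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackOff doubles the previous delay and clamps it at maxDelay,
    // mimicking the crash-loop back-off reported as "BackOff" in the events.
    func nextBackOff(prev, base, maxDelay time.Duration) time.Duration {
        if prev == 0 {
            return base
        }
        if next := 2 * prev; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        base := 10 * time.Second    // assumed default initial delay
        maxDelay := 5 * time.Minute // assumed default cap

        var d time.Duration
        for i := 1; i <= 8; i++ {
            d = nextBackOff(d, base, maxDelay)
            fmt.Printf("restart %d: wait %v before next attempt\n", i, d)
        }
    }

The exact constants belong to the kubelet; only the doubling-with-a-cap shape matters for reading these events.
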
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
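
The recurring DNSConfigForming warning ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1") means the node's resolv.conf listed more nameservers than the kubelet will pass through to pods, so only the first few were kept. A minimal sketch of that truncation, assuming the usual limit of three nameservers:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // assumed kubelet limit

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println("open resolv.conf:", err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
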
Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:05:29.116: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
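
Every entry in this dump has the shape "event for <object>: {<source>} <Reason>: <message>", which is simply the kube-system Event objects replayed by the test framework. A hedged client-go sketch that lists those events and prints them in roughly the same shape (client construction as in the earlier node sketch, with KUBECONFIG again assumed):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig source
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        events, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, ev := range events.Items {
            // Mirrors the dump format: event for <object>: {<component> <host>} <Reason>: <message>
            fmt.Printf("event for %s: {%s %s} %s: %s\n",
                ev.InvolvedObject.Name, ev.Source.Component, ev.Source.Host, ev.Reason, ev.Message)
        }
    }
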
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
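
On bootstrap-e2e-minion-group-kkkk the metrics-server pod cycles through probe failures, container kills and back-off, which is the kind of churn the reboot test tolerates only as long as the pod eventually reports Ready again. A hedged client-go sketch for spot-checking that state — the namespace and pod name are copied from the dump above, and KUBECONFIG is again an assumption:

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig source
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Pod name copied from the event dump above.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "metrics-server-v0.5.2-867b8754b9-rxlfn", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        ready := false
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Println("ready:", ready)
        for _, st := range pod.Status.ContainerStatuses {
            fmt.Printf("container %s restarts=%d\n", st.Name, st.RestartCount)
        }
    }
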
Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:05:29.116 (50ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:05:29.116 Jan 29 08:05:29.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:05:29.159 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:05:29.159 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:05:29.159 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:05:29.159 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:05:29.159 STEP: Collecting events from namespace "reboot-5032". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:05:29.159 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:05:29.2 Jan 29 08:05:29.242: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:05:29.242: INFO: Jan 29 08:05:29.310: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:05:29.352: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1588 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:02:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:29.353: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:05:29.399: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:05:29.519: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container etcd-container ready: true, restart count 3 Jan 29 08:05:29.519: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 08:05:29.519: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container l7-lb-controller ready: true, restart count 5 Jan 29 08:05:29.519: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.519: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:05:29.519: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:05:29.519: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container etcd-container ready: true, restart count 0 Jan 29 08:05:29.519: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:05:29.519: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 08:05:29.519: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 08:05:29.519: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 08:05:29.697: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:05:29.697: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.739: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 1833 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:02:34 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:29.739: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.784: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.846: INFO: kube-proxy-bootstrap-e2e-minion-group-kkkk started at 2023-01-29 07:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.846: INFO: Container kube-proxy ready: true, restart count 2 Jan 29 08:05:29.846: INFO: metadata-proxy-v0.1-9b6hn started at 2023-01-29 07:56:23 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.846: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:29.846: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:29.846: INFO: konnectivity-agent-5fbzh started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.846: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 08:05:29.846: INFO: metrics-server-v0.5.2-867b8754b9-rxlfn started at 2023-01-29 07:57:02 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.846: INFO: Container metrics-server ready: false, restart count 7 Jan 29 08:05:29.846: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 08:05:30.005: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-kkkk Jan 29 08:05:30.005: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.047: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 1836 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:02:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 
UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:30.047: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.092: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.153: INFO: kube-proxy-bootstrap-e2e-minion-group-ndwb started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 
29 08:05:30.153: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 08:05:30.153: INFO: metadata-proxy-v0.1-67wn6 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:30.153: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:30.153: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:30.153: INFO: konnectivity-agent-rnjhw started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.153: INFO: Container konnectivity-agent ready: true, restart count 1 Jan 29 08:05:30.153: INFO: coredns-6846b5b5f-mxv6m started at 2023-01-29 07:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.153: INFO: Container coredns ready: true, restart count 1 Jan 29 08:05:30.320: INFO: Latency metrics for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.320: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.362: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 1865 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:03:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:05:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 
+0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:30.362: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.407: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.506: INFO: kube-proxy-bootstrap-e2e-minion-group-z5pf started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container kube-proxy ready: false, restart count 3 Jan 29 08:05:30.506: INFO: l7-default-backend-8549d69d99-dr7rr started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container default-http-backend ready: true, restart count 2 Jan 29 08:05:30.506: INFO: volume-snapshot-controller-0 started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container volume-snapshot-controller ready: false, restart count 6 Jan 29 08:05:30.506: INFO: kube-dns-autoscaler-5f6455f985-sfpjt started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container autoscaler ready: true, restart count 5 Jan 29 08:05:30.506: INFO: coredns-6846b5b5f-xx69z started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container coredns ready: true, restart count 5 Jan 29 08:05:30.506: INFO: metadata-proxy-v0.1-7wz67 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:30.506: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:30.506: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:30.506: INFO: konnectivity-agent-dr7js started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 08:05:30.671: INFO: Latency metrics for node bootstrap-e2e-minion-group-z5pf END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:05:30.671 (1.512s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:05:30.671 (1.512s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:05:30.671 STEP: Destroying namespace "reboot-5032" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 08:05:30.671 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:05:30.717 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:05:30.717 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:05:30.717 (0s)
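The disruption step is visible in the SSH lines of the log: the test pushes a short script to each node over SSH, then waits up to 2m0s for the node's Ready condition to become false before checking recovery. In this run every node kept logging that its Ready condition was "true instead of false" for the entire wait, so the suite failed at test/e2e/cloud/gcp/reboot.go:190 with "at least one node failed to reboot in the time given". The script, shown in the log as a single escaped string, is reformatted here for readability with explanatory comments added (the while loops simply retry each iptables call until it succeeds):

    nohup sh -c '
        set -x
        sleep 10
        # keep loopback traffic working so the node can still reach itself
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        # drop all other inbound packets
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        # after two minutes, remove the DROP rule and the loopback exception
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

The expectation, per the test name, is that the node stops reporting status and drops out of Ready while inbound traffic is blocked; here none of the three nodes was observed as NotReady within the window the test waited.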
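For manual triage, the same Ready condition the test polls can be read directly from the API server. A minimal sketch, assuming kubectl access to the test cluster and using one of the node names from this run:

    kubectl get node bootstrap-e2e-minion-group-kkkk \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

The same failure, as recorded in junit_01.xml, follows.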
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:05:29.066 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:03:10.553 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:03:10.553 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:03:10.553 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:03:10.553 Jan 29 08:03:10.553: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:03:10.554 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:03:10.677 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:03:10.758 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:03:10.839 (286ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:03:10.839 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:03:10.839 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 08:03:10.839 Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-ndwb Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-kkkk Jan 29 08:03:10.933: INFO: Getting bootstrap-e2e-minion-group-z5pf Jan 29 08:03:10.974: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true Jan 29 08:03:11.006: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true Jan 29 08:03:11.007: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true Jan 29 08:03:11.016: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.016: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes: 
[metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.049: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 42.460792ms Jan 29 08:03:11.059: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 42.54793ms Jan 29 08:03:11.059: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.059: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:03:11.059: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:03:11.059: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:03:11.093: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.969869ms Jan 29 08:03:11.093: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 44.289388ms Jan 29 08:03:11.093: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 45.426836ms Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. Elapsed: 45.555584ms Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt] Jan 29 08:03:11.095: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:03:11.095: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 46.008883ms Jan 29 08:03:11.095: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 46.176259ms Jan 29 08:03:11.095: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:03:11.095: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-9b6hn kube-proxy-bootstrap-e2e-minion-group-kkkk] Jan 29 08:03:11.095: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:03:11.095: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:03:11.580: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:03:11.580: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:03:11.616: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:03:11.616: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:03:11.619: INFO: ssh 
prow@34.83.224.154:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: stdout: "" Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:03:11.619: INFO: ssh prow@34.83.224.154:22: exit code: 0 Jan 29 08:03:11.619: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z5pf condition Ready to be false Jan 29 08:03:11.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:11.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:11.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:13.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.708: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:15.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.751: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:17.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:19.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:03:21.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:21.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:21.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:23.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:25.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:27.975: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:27.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:28.003: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:30.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.060: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:32.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:34.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:36.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.188: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:38.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:40.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:03:42.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:22.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:24.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:27.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:29.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:31.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:33.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:35.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:37.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:39.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.364: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:41.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:43.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:45.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:47.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:49.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:49.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:04:49.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.588: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:51.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.630: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:53.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:55.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:57.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:04:59.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:01.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:03.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:03.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:03.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:05.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:05.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:05.903: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:07.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:07.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:07.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:09.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:09.994: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:09.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 08:05:11.968: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s
Jan 29 08:05:11.995: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s
Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-z5pf didn't reach desired Ready condition status (false) within 2m0s
Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test.
Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test.
Jan 29 08:05:11.996: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test.
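For the last two minutes of the spec, the framework has been polling each node's Ready condition and waiting for it to turn False, i.e. for evidence that the node actually dropped off the cluster while its inbound traffic was blocked; the condition never leaves True, so all three nodes are marked as having failed the reboot test. The termination-hook output that follows reads back /tmp/drop-inbound.log from each node, and its "+"-prefixed xtrace lines fold back into roughly the script below. This is a reconstruction from the trace, not the literal command string the test assembles in test/e2e/cloud/gcp/reboot.go; the backgrounding and the redirect into /tmp/drop-inbound.log are inferred from the fact that the hook finds the trace in that file.

    # Approximate reconstruction of the per-node drop-inbound command (inferred
    # from the xtrace output shown in the termination hook below).
    nohup sh -x -c '
      sleep 10                                  # give the SSH session that launched this time to return
      while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done   # keep loopback traffic allowed
      while true; do sudo iptables -I INPUT 2 -j DROP && break; done                  # drop all other inbound packets
      date                                      # logged as: Sun Jan 29 08:03:21 UTC 2023
      sleep 120                                 # keep inbound traffic blocked for 2 minutes
      while true; do sudo iptables -D INPUT -j DROP && break; done                    # restore connectivity
      while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

Read against the verdict above, the timeline is: the DROP rule was inserted at 08:03:21 and removed after the 120 s sleep, yet none of the three Ready conditions ever reported False within the 2m0s the test allows, which is what produces the failure at reboot.go:190.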
Jan 29 08:05:11.996: INFO: Executing termination hook on nodes
Jan 29 08:05:11.996: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk
Jan 29 08:05:11.996: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22)
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: stderr: ""
Jan 29 08:05:28.005: INFO: ssh prow@34.168.132.145:22: exit code: 0
Jan 29 08:05:28.005: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb
Jan 29 08:05:28.005: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22)
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: stderr: ""
Jan 29 08:05:28.538: INFO: ssh prow@104.199.118.209:22: exit code: 0
Jan 29 08:05:28.538: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf
Jan 29 08:05:28.538: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22)
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:03:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: stderr: ""
Jan 29 08:05:29.066: INFO: ssh prow@34.83.224.154:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:05:29.066
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 08:05:29.066 (2m18.227s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:05:29.066
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:05:29.066 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:05:29.116: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:05:29.116: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:05:29.116: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:05:29.116: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:05:29.116: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:05:29.116: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:05:29.116: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:05:29.116: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:05:29.116: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:05:29.116: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:05:29.116: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:05:29.116: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:05:29.116: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:05:29.116: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:05:29.116: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:05:29.116: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:05:29.116: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
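The kube-system event dump in this AfterEach step (it continues below for the metadata-proxy and metrics-server pods) repeats the same pattern for every workload on the three minions: a NodeNotReady event from the node-controller, then SandboxChanged, failed liveness/readiness probes, and back-off restarts once the node is reachable again. When triaging a run like this by hand, roughly the same view can be pulled with kubectl; the commands below are illustrative, not part of the test, and the pod name is simply one of the pods named in the events above.

    # Roughly the event listing this AfterEach step prints (sorted by time).
    kubectl get events -n kube-system --sort-by=.lastTimestamp

    # The per-node Ready condition the reboot test was polling.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{" ("}{.status.conditions[?(@.type=="Ready")].reason}{")"}{"\n"}{end}'

    # Probe failures and restart back-offs for one of the pods named above.
    kubectl describe pod -n kube-system konnectivity-agent-5fbzh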
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:05:29.116: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:05:29.116: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:05:29.116: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:05:29.116: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:05:29.116 (50ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:05:29.116 Jan 29 08:05:29.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:05:29.159 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:05:29.159 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:05:29.159 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:05:29.159 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:05:29.159 STEP: Collecting events from namespace "reboot-5032". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:05:29.159 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:05:29.2 Jan 29 08:05:29.242: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:05:29.242: INFO: Jan 29 08:05:29.310: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:05:29.352: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1588 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:02:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:35 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:29.353: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:05:29.399: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:05:29.519: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container etcd-container ready: true, restart count 3 Jan 29 08:05:29.519: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 08:05:29.519: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container l7-lb-controller ready: true, restart count 5 Jan 29 08:05:29.519: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.519: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:05:29.519: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:05:29.519: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container etcd-container ready: true, restart count 0 Jan 29 08:05:29.519: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:05:29.519: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 08:05:29.519: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-scheduler ready: true, restart count 4 Jan 29 08:05:29.519: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.519: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 08:05:29.697: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:05:29.697: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.739: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 1833 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:02:34 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:29.739: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.784: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:05:29.846: INFO: kube-proxy-bootstrap-e2e-minion-group-kkkk started at 2023-01-29 07:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.846: INFO: Container kube-proxy ready: true, restart count 2 Jan 29 08:05:29.846: INFO: metadata-proxy-v0.1-9b6hn started at 2023-01-29 07:56:23 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.846: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:29.846: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:29.846: INFO: konnectivity-agent-5fbzh started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:29.846: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 08:05:29.846: INFO: metrics-server-v0.5.2-867b8754b9-rxlfn started at 2023-01-29 07:57:02 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:29.846: INFO: Container metrics-server ready: false, restart count 7 Jan 29 08:05:29.846: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 08:05:30.005: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-kkkk Jan 29 08:05:30.005: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.047: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 1836 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:02:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 
UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:02:34 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:30.047: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.092: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.153: INFO: kube-proxy-bootstrap-e2e-minion-group-ndwb started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 
29 08:05:30.153: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 08:05:30.153: INFO: metadata-proxy-v0.1-67wn6 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:30.153: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:30.153: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:30.153: INFO: konnectivity-agent-rnjhw started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.153: INFO: Container konnectivity-agent ready: true, restart count 1 Jan 29 08:05:30.153: INFO: coredns-6846b5b5f-mxv6m started at 2023-01-29 07:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.153: INFO: Container coredns ready: true, restart count 1 Jan 29 08:05:30.320: INFO: Latency metrics for node bootstrap-e2e-minion-group-ndwb Jan 29 08:05:30.320: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.362: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 1865 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:03:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:05:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 
+0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:05:30.362: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.407: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:05:30.506: INFO: kube-proxy-bootstrap-e2e-minion-group-z5pf started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container kube-proxy ready: false, restart count 3 Jan 29 08:05:30.506: INFO: l7-default-backend-8549d69d99-dr7rr started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container default-http-backend ready: true, restart count 2 Jan 29 08:05:30.506: INFO: volume-snapshot-controller-0 started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container volume-snapshot-controller ready: false, restart count 6 Jan 29 08:05:30.506: INFO: kube-dns-autoscaler-5f6455f985-sfpjt started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container autoscaler ready: true, restart count 5 Jan 29 08:05:30.506: INFO: coredns-6846b5b5f-xx69z started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container coredns ready: true, restart count 5 Jan 29 08:05:30.506: INFO: metadata-proxy-v0.1-7wz67 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:05:30.506: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:05:30.506: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:05:30.506: INFO: konnectivity-agent-dr7js started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:05:30.506: INFO: Container konnectivity-agent ready: false, restart count 3 Jan 29 08:05:30.671: INFO: Latency metrics for node bootstrap-e2e-minion-group-z5pf END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:05:30.671 (1.512s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:05:30.671 (1.512s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:05:30.671 STEP: Destroying namespace "reboot-5032" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 08:05:30.671 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:05:30.717 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:05:30.717 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:05:30.717 (0s)
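For manual triage, the per-node data the framework dumps above (node conditions, the pods assigned to the node, node events) can be pulled directly with kubectl against the test cluster. A minimal sketch, assuming the kubeconfig from this run and one of the node names above:

  # Node conditions (Ready, DiskPressure, NetworkUnavailable, ...)
  kubectl get node bootstrap-e2e-minion-group-z5pf \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

  # Pods scheduled to that node (mirrors "Logging pods the kubelet thinks is on node")
  kubectl get pods -A --field-selector spec.nodeName=bootstrap-e2e-minion-group-z5pf -o wide

  # Events recorded against the node object
  kubectl get events -A --field-selector involvedObject.kind=Node,involvedObject.name=bootstrap-e2e-minion-group-z5pf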
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
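The focus argument above is just the escaped name of the single spec being run. When iterating on it locally against an existing cluster, the same spec can be selected from a prebuilt e2e.test binary; the build target and paths below are assumptions about a standard Kubernetes checkout, not taken from this log:

  # Build the e2e test binary, then run only the outbound-drop spec.
  make WHAT=test/e2e/e2e.test
  ./_output/bin/e2e.test --provider=gce --kubeconfig=$HOME/.kube/config \
    --ginkgo.focus='each node by dropping all outbound packets'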
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:08:36.474 (from ginkgo_report.xml)
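What the spec does on each node: it SSHes in and launches a background script (quoted with shell escapes later in this log) that blocks all outbound traffic except loopback for two minutes, which keeps the kubelet from posting status, then removes the rules. Reproduced here in readable form, with comments added:

  nohup sh -c '
    set -x
    sleep 10
    # keep loopback working, then drop every other outbound packet
    while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
    date
    sleep 120
    # restore outbound traffic
    while true; do sudo iptables -D OUTPUT -j DROP && break; done
    while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
  ' >/tmp/drop-outbound.log 2>&1 &

The test then expects each node's Ready condition to go false and come back true within the allowed window. In this run the nodes never left Ready within the 2m0s wait (the repeated "Condition Ready ... is true instead of false" lines below), so the suite marks all three nodes as having failed the reboot test.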
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:05:30.789 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:05:30.789 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:05:30.789 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:05:30.789 Jan 29 08:05:30.789: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:05:30.79 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:05:30.917 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:05:30.997 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:05:31.078 (289ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:05:31.078 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:05:31.078 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 08:05:31.078 Jan 29 08:05:31.173: INFO: Getting bootstrap-e2e-minion-group-kkkk Jan 29 08:05:31.173: INFO: Getting bootstrap-e2e-minion-group-z5pf Jan 29 08:05:31.218: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true Jan 29 08:05:31.218: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true Jan 29 08:05:31.223: INFO: Getting bootstrap-e2e-minion-group-ndwb Jan 29 08:05:31.260: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod 
"kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.265: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true Jan 29 08:05:31.306: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 45.133147ms Jan 29 08:05:31.306: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 45.26185ms Jan 29 08:05:31.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.307: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.654821ms Jan 29 08:05:31.307: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 46.829954ms Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 47.40828ms Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.308: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.308: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:05:31.308: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:05:31.308: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.723076ms Jan 29 08:05:31.308: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:31.309: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 42.350087ms Jan 29 08:05:31.351: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 42.210919ms Jan 29 08:05:31.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.351: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:05:31.351: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:05:31.832: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:05:31.872: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:05:31.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:31.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:33.350: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.089166502s Jan 29 08:05:33.350: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:33.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090805957s Jan 29 08:05:33.351: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:33.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:33.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:35.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088520681s Jan 29 08:05:35.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:35.352: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 4.091056548s Jan 29 08:05:35.352: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:35.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:35.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:37.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.088260209s Jan 29 08:05:37.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:37.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 6.090632557s Jan 29 08:05:37.351: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:06:24.812: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 53.551147182s Jan 29 08:06:24.812: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:06:24.812: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 53.55149063s Jan 29 08:06:24.812: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:06:25.350: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 54.089177362s Jan 29 08:06:25.350: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 08:06:25.353: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. 
Elapsed: 54.091887156s Jan 29 08:06:25.353: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:06:25.353: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:06:25.353: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:06:25.353: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:06:25.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:25.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: stdout: "" Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: exit code: 0 Jan 29 08:06:25.880: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z5pf condition Ready to be false Jan 29 08:06:25.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:29.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:29.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:30.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:06:31.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:31.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:32.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:33.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:33.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:34.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:35.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:35.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:36.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:37.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:37.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:38.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:39.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:39.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:40.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:41.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:41.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:42.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:43.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:43.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:44.315: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:56.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:06:56.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:56.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:16.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:16.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:17.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:18.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:18.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:19.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:20.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:20.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:07:21.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:22.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:22.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:23.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:24.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:24.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:25.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:26.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:26.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:27.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:28.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:28.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:29.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:30.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:30.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:31.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:32.981: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:07:32.981: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:07:33.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:07:35.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:37.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:41.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:43.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:45.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:47.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:49.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:51.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:53.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:55.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:57.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:59.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:02.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:04.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:06.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:08.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:10.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:12.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:14.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:16.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:18.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:20.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:22.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:24.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-z5pf didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test. Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test. Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test. Jan 29 08:08:26.555: INFO: Executing termination hook on nodes Jan 29 08:08:26.555: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:08:26.555: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:05:41 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:08:27.077: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:08:27.077: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:05:41 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:08:27.598: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:08:27.598: INFO: SSH "cat /tmp/drop-outbound.log && rm 
/tmp/drop-outbound.log" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:06:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:08:36.474 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 08:08:36.474 (3m5.396s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:08:36.474 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:08:36.474 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
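The termination hook earlier in this block just cats and deletes /tmp/drop-outbound.log on each node, confirming the script ran its own cleanup. If a run is interrupted inside the 120 s drop window, the two rules can be removed by hand with the same deletes the script issues; the project and zone below are taken from the node info earlier in this log:

  gcloud compute ssh bootstrap-e2e-minion-group-z5pf \
    --project=k8s-jkns-e2e-gce-ubuntu-slow --zone=us-west1-b -- \
    'sudo iptables -D OUTPUT -j DROP; sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT'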
Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:08:36.523: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:08:36.523: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} 
Started: Started container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:08:36.523: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_dc6aabca-419e-4b82-881a-a69a55bcf97f became leader Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kkkk_kube-system(4519601567f1523d5567ec952650e112) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d4386016-19d3-4013-9169-36494a5b7e73 became leader Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:08:36.523: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:08:36.524 (50ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:08:36.524 Jan 29 08:08:36.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:08:36.566 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:08:36.566 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:08:36.566 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:08:36.566 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:08:36.567 STEP: Collecting events from namespace "reboot-9849". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:08:36.567 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:08:36.608 Jan 29 08:08:36.664: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:08:36.664: INFO: Jan 29 08:08:36.707: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:08:36.748: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1973 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:36.749: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:08:36.793: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:08:36.848: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container etcd-container ready: true, restart count 0 Jan 29 08:08:36.849: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:08:36.849: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-controller-manager ready: false, restart count 4 Jan 29 08:08:36.849: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-scheduler ready: true, restart count 5 Jan 29 08:08:36.849: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 08:08:36.849: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container etcd-container ready: true, restart count 4 Jan 29 08:08:36.849: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 08:08:36.849: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 29 08:08:36.849: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:36.849: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:08:36.849: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:08:37.018: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:08:37.018: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.060: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 1980 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.061: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.105: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.162: INFO: konnectivity-agent-5fbzh started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.162: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 08:08:37.162: INFO: metrics-server-v0.5.2-867b8754b9-rxlfn started at 2023-01-29 07:57:02 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.162: INFO: Container metrics-server ready: false, restart count 8 Jan 29 08:08:37.162: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 08:08:37.162: INFO: kube-proxy-bootstrap-e2e-minion-group-kkkk started at 2023-01-29 07:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.162: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 08:08:37.162: INFO: metadata-proxy-v0.1-9b6hn started at 2023-01-29 07:56:23 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.162: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:08:37.162: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:08:37.316: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.316: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.358: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 1984 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.358: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.406: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.532: INFO: kube-proxy-bootstrap-e2e-minion-group-ndwb started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: Container kube-proxy ready: false, restart count 5 Jan 29 08:08:37.532: INFO: metadata-proxy-v0.1-67wn6 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.532: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:08:37.532: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:08:37.532: INFO: konnectivity-agent-rnjhw started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: 
Container konnectivity-agent ready: true, restart count 3 Jan 29 08:08:37.532: INFO: coredns-6846b5b5f-mxv6m started at 2023-01-29 07:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: Container coredns ready: true, restart count 3 Jan 29 08:08:37.717: INFO: Latency metrics for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.717: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:08:37.759: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 1865 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:03:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:05:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 
+0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.760: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:08:41.332: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:09:09.228: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-z5pf: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:09:09.228 (32.662s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:09:09.228 (32.662s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:09:09.228 STEP: Destroying namespace "reboot-9849" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 08:09:09.229 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:09:09.277 (48ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:09:09.277 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:09:09.277 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:08:36.474 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:05:30.789 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:05:30.789 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:05:30.789 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:05:30.789 Jan 29 08:05:30.789: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:05:30.79 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:05:30.917 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:05:30.997 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:05:31.078 (289ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:05:31.078 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:05:31.078 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 08:05:31.078 Jan 29 08:05:31.173: INFO: Getting bootstrap-e2e-minion-group-kkkk Jan 29 08:05:31.173: INFO: Getting bootstrap-e2e-minion-group-z5pf Jan 29 08:05:31.218: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true Jan 29 08:05:31.218: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true Jan 29 08:05:31.223: INFO: Getting bootstrap-e2e-minion-group-ndwb Jan 29 08:05:31.260: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.260: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod 
"kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.261: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.265: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true Jan 29 08:05:31.306: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 45.133147ms Jan 29 08:05:31.306: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 45.26185ms Jan 29 08:05:31.306: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.307: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.654821ms Jan 29 08:05:31.307: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 46.829954ms Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 47.40828ms Jan 29 08:05:31.308: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.308: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:05:31.308: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:05:31.308: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:05:31.308: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.723076ms Jan 29 08:05:31.308: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:31.309: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.309: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 42.350087ms Jan 29 08:05:31.351: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 42.210919ms Jan 29 08:05:31.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:05:31.351: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:05:31.351: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:05:31.351: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:05:31.832: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:05:31.832: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:05:31.872: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:05:31.872: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:05:31.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:31.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:33.350: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.089166502s Jan 29 08:05:33.350: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:33.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.090805957s Jan 29 08:05:33.351: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:33.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:33.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:35.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088520681s Jan 29 08:05:35.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:35.352: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 4.091056548s Jan 29 08:05:35.352: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:05:35.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:35.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:05:37.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.088260209s Jan 29 08:05:37.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:05:37.351: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 6.090632557s Jan 29 08:05:37.351: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:06:24.812: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 53.551147182s Jan 29 08:06:24.812: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:04:41 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:06:24.812: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=false. Elapsed: 53.55149063s Jan 29 08:06:24.812: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-z5pf' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:05:24 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:23 +0000 UTC }] Jan 29 08:06:25.350: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 54.089177362s Jan 29 08:06:25.350: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 08:06:25.353: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. 
Elapsed: 54.091887156s Jan 29 08:06:25.353: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:06:25.353: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-7wz67 volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf] Jan 29 08:06:25.353: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:06:25.353: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:06:25.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:25.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: stdout: "" Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:06:25.880: INFO: ssh prow@34.83.224.154:22: exit code: 0 Jan 29 08:06:25.880: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-z5pf condition Ready to be false Jan 29 08:06:25.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:27.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:29.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:29.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:30.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:06:31.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:31.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:32.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:33.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:33.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:34.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:35.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:35.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:36.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:37.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:37.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:38.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:39.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:39.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:40.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:41.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:41.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:42.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:43.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:43.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:44.315: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:46.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:48.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:50.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:52.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:54.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:56.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:06:56.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:56.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:06:58.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:00.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:02.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:04.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:06.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:08.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:10.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:12.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:14.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:16.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:16.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:17.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:18.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:18.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:19.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:20.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:20.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:07:21.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:22.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:22.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:23.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:24.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:24.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:25.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:26.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:26.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:27.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:28.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:28.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:29.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:30.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:30.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:31.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:32.981: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:07:32.981: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:07:33.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:07:35.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:37.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:39.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:41.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:43.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:45.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:47.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:49.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:51.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:53.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:55.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:57.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:07:59.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:02.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:04.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:06.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:08.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:10.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:12.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:14.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:16.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:18.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:20.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:22.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:24.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-z5pf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-z5pf didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test. Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test. Jan 29 08:08:26.555: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test. Jan 29 08:08:26.555: INFO: Executing termination hook on nodes Jan 29 08:08:26.555: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:08:26.555: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:05:41 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:08:27.077: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:08:27.077: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:08:27.077: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:05:41 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:08:27.598: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:08:27.598: INFO: Getting external IP address for bootstrap-e2e-minion-group-z5pf Jan 29 08:08:27.598: INFO: SSH "cat /tmp/drop-outbound.log && rm 
/tmp/drop-outbound.log" on bootstrap-e2e-minion-group-z5pf(34.83.224.154:22) Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 08:06:35 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: stderr: "" Jan 29 08:08:36.473: INFO: ssh prow@34.83.224.154:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:08:36.474 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 08:08:36.474 (3m5.396s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:08:36.474 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:08:36.474 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
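The kube-system event streams in this AfterEach are collected through the e2e framework; an equivalent ad-hoc query against the same cluster (a sketch only, assuming the /workspace/.kube/config kubeconfig used by the suite is still valid) would look like:

    # Newest-last event list for one of the pods named above.
    kubectl --kubeconfig=/workspace/.kube/config -n kube-system get events \
      --field-selector involvedObject.name=coredns-6846b5b5f-xx69z \
      --sort-by=.lastTimestamp

The connection-refused and context-deadline readiness failures on the coredns pods are consistent with windows in which the container was being restarted or the node's outbound traffic was being dropped.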
Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:08:36.523: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:08:36.523: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:08:36.523: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} 
Started: Started container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:08:36.523: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:08:36.523: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
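The long run of "Condition Ready of node ... is true instead of false" lines earlier in the output is the test polling each node's Ready condition, expecting it to turn false once outbound packets are dropped; it never did within 2m0s. A manual spot check of the same condition (an illustrative command, not something the test runs) is:

    # Print the Ready condition status and reason for one of the nodes under test.
    kubectl --kubeconfig=/workspace/.kube/config get node bootstrap-e2e-minion-group-z5pf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{" / "}{.status.conditions[?(@.type=="Ready")].reason}{"\n"}'

Throughout the wait this kept reporting True / KubeletReady, which is what ultimately fails the reboot test for all three nodes.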
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
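The konnectivity-agent liveness failures above are HTTP GETs issued by the kubelet against the pod's :8093/healthz endpoint that timed out. Probing the same endpoint by hand from the node would look roughly like the following; the prow user and external IP are the ones the test itself uses, while the pod IP is only illustrative (it changed across restarts) and curl being available on the node image is an assumption:

    # Hit the konnectivity-agent health endpoint from its own node.
    ssh prow@34.168.132.145 'curl -sS -m 5 http://10.64.1.10:8093/healthz'

While the DROP rule is installed this request would also be expected to hang, since the node's own non-loopback outbound packets are being discarded.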
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
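Several of the pods above cycle through Killing and BackOff events as their probes fail and their containers are restarted. A quick way to see how many restarts each container has accumulated (again just an ad-hoc sketch against the test cluster) is:

    # Restart count and readiness per container of one konnectivity-agent pod.
    kubectl --kubeconfig=/workspace/.kube/config -n kube-system get pod konnectivity-agent-dr7js \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{" restarts="}{.restartCount}{" ready="}{.ready}{"\n"}{end}'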
Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:08:36.523: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:08:36.523: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:08:36.523: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:08:36.523: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 08:08:36.523: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:08:36.523: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_dc6aabca-419e-4b82-881a-a69a55bcf97f became leader Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
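The four kube-controller-manager "became leader" events above (and the kube-scheduler ones further down) usually mean the component restarted and re-acquired its leader-election lock each time. A hedged sketch that reads the coordination Leases to show the current holders, assuming lease-based leader election and the same kubeconfig path as the run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for _, name := range []string{"kube-scheduler", "kube-controller-manager"} {
		lease, err := client.CoordinationV1().Leases("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		holder := ""
		if lease.Spec.HolderIdentity != nil {
			holder = *lease.Spec.HolderIdentity
		}
		fmt.Printf("%s lease held by %q, last renewed %v\n", name, holder, lease.Spec.RenewTime)
	}
}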
Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:08:36.523: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kkkk_kube-system(4519601567f1523d5567ec952650e112) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
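The DNSConfigForming warnings on the kube-proxy and metadata-proxy pods report that resolv.conf carried more than the three nameservers the kubelet (and glibc) will use, so the extra entries were dropped. A small stand-alone sketch that reproduces that check on a node, assuming /etc/resolv.conf is the file the kubelet consumed:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Counts nameserver entries in resolv.conf; the kubelet emits DNSConfigForming
// when more than three are present and applies only the first three.
func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	fmt.Printf("found %d nameservers: %v\n", len(servers), servers)
	if len(servers) > 3 {
		fmt.Println("more than 3 nameservers: expect DNSConfigForming and a truncated nameserver line")
	}
}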
Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:08:36.523: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:08:36.523: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d4386016-19d3-4013-9169-36494a5b7e73 became leader Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:08:36.523: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:08:36.523: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:08:36.523: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
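Most of the kube-system events above are Killing/BackOff/SandboxChanged cycles, which show up later in this log as non-zero restart counts. The following is a hedged client-go sketch that lists those restart counts and last termination reasons directly, assuming the run's kubeconfig path; it approximates what the framework prints rather than reproducing its actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			reason := ""
			if cs.LastTerminationState.Terminated != nil {
				reason = cs.LastTerminationState.Terminated.Reason
			}
			fmt.Printf("%s/%s ready=%t restarts=%d lastTermination=%q\n",
				pod.Name, cs.Name, cs.Ready, cs.RestartCount, reason)
		}
	}
}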
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:08:36.523: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
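The metrics-server probe failures above come in three flavours: "context deadline exceeded" (the probe timed out), "connection refused" (nothing listening while the container restarts), and "HTTP probe failed with statuscode: 500" (the endpoint answered but reported unhealthy). A rough sketch of what such an HTTP probe boils down to is below; the plain-HTTP address is borrowed from an earlier healthz event and the 1-second timeout is an assumption, not the pod's actual probe spec.

package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"time"
)

// Roughly what an HTTP readiness/liveness probe does: GET with a short
// timeout, success only for 2xx/3xx. Endpoint and timeout are illustrative.
func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.64.2.16:8080/healthz")
	if err != nil {
		var netErr net.Error
		switch {
		case errors.As(err, &netErr) && netErr.Timeout():
			fmt.Println("probe failed: timeout (matches 'context deadline exceeded' events)")
		default:
			fmt.Println("probe failed:", err) // e.g. connection refused while the container restarts
		}
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe succeeded:", resp.Status)
	} else {
		fmt.Println("probe failed: HTTP", resp.StatusCode) // matches 'HTTP probe failed with statuscode: 500'
	}
}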
Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:08:36.523: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:08:36.523: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:08:36.523: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
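The NodeNotReady events scattered through this dump, and the teardown below that waits up to 3m0s for nodes to be ready, both hinge on each node's Ready condition. Here is a hedged client-go sketch that prints the Ready condition, the Unschedulable flag behind the "1 node(s) were unschedulable" scheduling failures, and the taint count per node; it again assumes the run's kubeconfig path and is only an approximation of the framework's readiness check.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := corev1.ConditionUnknown
		reason := ""
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready, reason = c.Status, c.Reason
			}
		}
		fmt.Printf("%s Ready=%s (%s) unschedulable=%t taints=%d\n",
			n.Name, ready, reason, n.Spec.Unschedulable, len(n.Spec.Taints))
	}
}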
Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:08:36.524: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:08:36.524 (50ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:08:36.524 Jan 29 08:08:36.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:08:36.566 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:08:36.566 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:08:36.566 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:08:36.566 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:08:36.567 STEP: Collecting events from namespace "reboot-9849". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:08:36.567 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:08:36.608 Jan 29 08:08:36.664: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:08:36.664: INFO: Jan 29 08:08:36.707: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:08:36.748: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1973 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:36.749: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:08:36.793: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:08:36.848: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container etcd-container ready: true, restart count 0 Jan 29 08:08:36.849: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:08:36.849: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-controller-manager ready: false, restart count 4 Jan 29 08:08:36.849: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-scheduler ready: true, restart count 5 Jan 29 08:08:36.849: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 08:08:36.849: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container etcd-container ready: true, restart count 4 Jan 29 08:08:36.849: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 08:08:36.849: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:36.849: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 29 08:08:36.849: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:36.849: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:08:36.849: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:08:37.018: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:08:37.018: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.060: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 1980 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:22 +0000 UTC,LastTransitionTime:2023-01-29 07:59:54 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.061: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.105: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.162: INFO: konnectivity-agent-5fbzh started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.162: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 08:08:37.162: INFO: metrics-server-v0.5.2-867b8754b9-rxlfn started at 2023-01-29 07:57:02 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.162: INFO: Container metrics-server ready: false, restart count 8 Jan 29 08:08:37.162: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 08:08:37.162: INFO: kube-proxy-bootstrap-e2e-minion-group-kkkk started at 2023-01-29 07:56:22 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.162: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 08:08:37.162: INFO: metadata-proxy-v0.1-9b6hn started at 2023-01-29 07:56:23 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.162: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:08:37.162: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:08:37.316: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-kkkk Jan 29 08:08:37.316: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.358: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 1984 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 08:05:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:23 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.358: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.406: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.532: INFO: kube-proxy-bootstrap-e2e-minion-group-ndwb started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: Container kube-proxy ready: false, restart count 5 Jan 29 08:08:37.532: INFO: metadata-proxy-v0.1-67wn6 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:08:37.532: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:08:37.532: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:08:37.532: INFO: konnectivity-agent-rnjhw started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: 
Container konnectivity-agent ready: true, restart count 3 Jan 29 08:08:37.532: INFO: coredns-6846b5b5f-mxv6m started at 2023-01-29 07:56:45 +0000 UTC (0+1 container statuses recorded) Jan 29 08:08:37.532: INFO: Container coredns ready: true, restart count 3 Jan 29 08:08:37.717: INFO: Latency metrics for node bootstrap-e2e-minion-group-ndwb Jan 29 08:08:37.717: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:08:37.759: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 1865 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:03:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:05:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:05:28 
+0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:03:04 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:08:37.760: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:08:41.332: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:09:09.228: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-z5pf: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:09:09.228 (32.662s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:09:09.228 (32.662s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:09:09.228 STEP: Destroying namespace "reboot-9849" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 08:09:09.229 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:09:09.277 (48ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:09:09.277 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:09:09.277 (0s)
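The dump above mostly records the reboot helper polling each node's Ready condition and logging the pods the kubelet reports. A minimal way to spot-check that same condition by hand, assuming kubectl access through the kubeconfig named in the log (the node name and the 2m window are taken from the failure output, not from the test source):

  kubectl --kubeconfig=/workspace/.kube/config get node bootstrap-e2e-minion-group-kkkk \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
  # Block until Ready flips to False (i.e. the node actually went down), mirroring the
  # "Waiting up to 2m0s for node ... condition Ready to be false" lines in the log:
  kubectl --kubeconfig=/workspace/.kube/config wait node/bootstrap-e2e-minion-group-kkkk \
    --for=condition=Ready=false --timeout=2m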
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:11:40.364 (from ginkgo_report.xml)
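As the transcript below shows, this variant triggers the reboot by SSHing a delayed "sudo reboot" to each node and then expects the Ready condition to go false within 2m0s and recover afterwards. A by-hand equivalent of that trigger, as a sketch only: the project and zone are read from the node info dumped earlier, and gcloud's SSH path differs from the harness's prow@ connection.

  gcloud compute ssh bootstrap-e2e-minion-group-kkkk \
    --project k8s-jkns-e2e-gce-ubuntu-slow --zone us-west1-b \
    --command "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &"
  # Then watch the node drop out of Ready and (ideally) come back:
  kubectl --kubeconfig=/workspace/.kube/config get nodes -w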
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:09:09.329 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:09:09.329 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:09:09.329 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:09:09.33 Jan 29 08:09:09.330: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:09:09.331 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:09:37.996 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:09:38.11 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:09:38.19 (28.86s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:09:38.19 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:09:38.19 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 08:09:38.19 Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-kkkk Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-z5pf Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-ndwb Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.450: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 43.992347ms Jan 29 08:09:38.450: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.10167ms Jan 29 08:09:38.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. Elapsed: 47.292192ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 47.456844ms Jan 29 08:09:38.454: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 47.323382ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 47.969144ms Jan 29 08:09:38.454: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 47.815947ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.454: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:09:38.454: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:09:38.455: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 48.281199ms Jan 29 08:09:38.455: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.455: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.455: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:09:38.455: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:09:38.972: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:09:38.980: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:09:39.014: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:39.022: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:40.495: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.088818909s Jan 29 08:09:40.495: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:41.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:41.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:42.494: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.087919434s Jan 29 08:09:42.495: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:43.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:43.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:44.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.123961612s Jan 29 08:09:44.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:45.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:45.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:10:46.496: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.168.148.246/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": stream error: stream ID 1919; INTERNAL_ERROR; received from peer Jan 29 08:10:46.496: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 29 08:10:46.496: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:10:46.496: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 07:56:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:07:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:07:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 07:56:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.2.29 PodIPs:[{IP:10.64.2.29}] StartTime:2023-01-29 07:56:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:06:01 +0000 UTC,FinishedAt:2023-01-29 08:07:15 +0000 UTC,ContainerID:containerd://0de4853acda0dbb798fdd22f658c13f02a1f2a071ca83f67365af60295846370,}} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://0de4853acda0dbb798fdd22f658c13f02a1f2a071ca83f67365af60295846370 Started:0xc00253042f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 08:10:47.191: INFO: Couldn't get node bootstrap-e2e-minion-group-ndwb Jan 29 08:10:47.197: INFO: Couldn't get node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:32.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:32.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:34.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:34.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:36.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:36.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:11:37.828: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0129 08:09:56.556219 1 main.go:125] Version: v6.1.0 I0129 08:09:56.557595 1 main.go:168] Metrics path successfully registered at /metrics I0129 08:09:56.557804 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] E0129 08:10:56.567012 1 main.go:86] Failed to list v1 volumesnapshots with error=Get "https://10.0.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshots": stream error: stream ID 1; INTERNAL_ERROR; received from peer Jan 29 08:11:37.828: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0129 08:09:56.556219 1 main.go:125] Version: v6.1.0 I0129 08:09:56.557595 1 main.go:168] Metrics path successfully registered at /metrics I0129 08:09:56.557804 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] E0129 08:10:56.567012 1 main.go:86] Failed to list v1 volumesnapshots with error=Get "https://10.0.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshots": stream error: stream ID 1; INTERNAL_ERROR; received from peer Jan 29 08:11:38.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:38.363: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:40.361: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test. Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test. Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:11:40.364 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 08:11:40.364 (2m2.174s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:11:40.364 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:11:40.364 Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:11:40.415: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:11:40.415: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_15ee3 became leader Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:11:40.415: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
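The BackOff entries above (kube-addon-manager, kube-controller-manager, and several node-side pods) all point at containers that restarted during the disruption; the harness only prints the last terminated container's log for volume-snapshot-controller-0 near the top of this section. For re-triaging any of the other crash-looping pods outside the harness, a minimal client-go sketch along the following lines can pull the same "last terminated container" log; the kubeconfig path, pod, and container names are simply the ones that appear in this report, and the snippet is illustrative rather than part of the test:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from this run's harness output (an assumption for this sketch).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Previous=true requests the log of the last terminated instance of the container,
    	// i.e. the same data this report prints for volume-snapshot-controller-0.
    	raw, err := cs.CoreV1().Pods("kube-system").
    		GetLogs("volume-snapshot-controller-0", &v1.PodLogOptions{
    			Container: "volume-snapshot-controller",
    			Previous:  true,
    		}).DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(raw))
    }
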
Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_dc6aabca-419e-4b82-881a-a69a55bcf97f became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_9109c6d5-4ebf-4db3-a5e7-2869de648c91 became leader Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kkkk_kube-system(4519601567f1523d5567ec952650e112) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d4386016-19d3-4013-9169-36494a5b7e73 became leader Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:11:40.415: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:11:40.416 (52ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:11:40.416 Jan 29 08:11:40.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:11:40.458 (42ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:11:40.458 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:11:40.458 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:11:40.458 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:11:40.458 STEP: Collecting events from namespace "reboot-8627". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:11:40.458 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:11:40.499 Jan 29 08:11:40.539: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:11:40.539: INFO: Jan 29 08:11:40.582: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:11:40.623: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1973 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:40.623: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:11:40.670: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:11:40.724: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container etcd-container ready: true, restart count 1 Jan 29 08:11:40.724: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:11:40.724: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-controller-manager ready: false, restart count 5 Jan 29 08:11:40.724: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-scheduler ready: false, restart count 5 Jan 29 08:11:40.724: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 29 08:11:40.724: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container etcd-container ready: true, restart count 5 Jan 29 08:11:40.724: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 29 08:11:40.724: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container l7-lb-controller ready: false, restart count 6 Jan 29 08:11:40.724: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:11:40.724: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:11:40.724: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:11:40.887: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:11:40.887: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.929: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 2198 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:40.930: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.974: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:41.022: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-kkkk: error trying to reach service: dial tcp 10.138.0.3:10250: connect: connection refused Jan 29 08:11:41.022: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.065: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 2199 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:41.066: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.124: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.169: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-ndwb: error trying to reach service: dial tcp 10.138.0.5:10250: connect: connection refused Jan 29 08:11:41.169: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.211: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 2197 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:08:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:41.211: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.257: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.308: INFO: coredns-6846b5b5f-xx69z started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container coredns ready: false, restart count 7 Jan 29 
08:11:41.308: INFO: metadata-proxy-v0.1-7wz67 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:11:41.308: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:11:41.308: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:11:41.308: INFO: konnectivity-agent-dr7js started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container konnectivity-agent ready: true, restart count 5 Jan 29 08:11:41.308: INFO: kube-proxy-bootstrap-e2e-minion-group-z5pf started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 08:11:41.308: INFO: l7-default-backend-8549d69d99-dr7rr started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 08:11:41.308: INFO: volume-snapshot-controller-0 started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 29 08:11:41.308: INFO: kube-dns-autoscaler-5f6455f985-sfpjt started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container autoscaler ready: false, restart count 5 Jan 29 08:11:41.527: INFO: Latency metrics for node bootstrap-e2e-minion-group-z5pf END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:11:41.527 (1.069s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:11:41.527 (1.069s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:11:41.527 STEP: Destroying namespace "reboot-8627" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 08:11:41.527 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:11:41.573 (46ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:11:41.573 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:11:41.573 (0s)
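The dump above keeps exercising the same two checks: polling each node's Ready condition ("Waiting up to 20s for node ... condition Ready to be true/false") and reaching the kubelet on port 10250, which fails with "connect: connection refused" on the nodes that never came back. Below is a minimal client-go sketch of that Ready-condition polling pattern, not the e2e framework's actual helper from test/e2e/cloud/gcp/reboot.go; the function name waitForNodeReady and the second (recovery) timeout are illustrative, while the kubeconfig path, node name, and the 2m0s "Ready to be false" window are taken from the log.

// waitready.go: hedged sketch of polling a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition matches wantReady
// or the timeout expires. Transient API errors are retried, mirroring how the
// log simply reports "Condition Ready of node ... is true instead of false"
// and keeps polling.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, wantReady bool, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return (cond.Status == corev1.ConditionTrue) == wantReady, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig path as reported by the framework (">>> kubeConfig: /workspace/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node := "bootstrap-e2e-minion-group-kkkk"
	// After the reboot command, Ready should drop within 2m0s (as in the log)...
	if err := waitForNodeReady(context.Background(), cs, node, false, 2*time.Minute); err != nil {
		fmt.Println("node never went NotReady:", err)
	}
	// ...and come back true within some recovery window (5m here is illustrative).
	if err := waitForNodeReady(context.Background(), cs, node, true, 5*time.Minute); err != nil {
		fmt.Println("node failed to come back Ready:", err)
	}
}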
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:11:40.364 from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:09:09.329 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 08:09:09.329 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:09:09.329 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 08:09:09.33 Jan 29 08:09:09.330: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 08:09:09.331 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 08:09:37.996 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 08:09:38.11 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 08:09:38.19 (28.86s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:09:38.19 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 08:09:38.19 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 08:09:38.19 Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-kkkk Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-z5pf Jan 29 08:09:38.285: INFO: Getting bootstrap-e2e-minion-group-ndwb Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ndwb condition Ready to be true Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-z5pf condition Ready to be true Jan 29 08:09:38.362: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-kkkk condition Ready to be true Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-ndwb has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-67wn6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-z5pf has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7wz67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Node bootstrap-e2e-minion-group-kkkk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-9b6hn" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.406: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-sfpjt" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.407: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 08:09:38.450: INFO: Pod "metadata-proxy-v0.1-67wn6": Phase="Running", Reason="", readiness=true. Elapsed: 43.992347ms Jan 29 08:09:38.450: INFO: Pod "metadata-proxy-v0.1-67wn6" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.453: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.10167ms Jan 29 08:09:38.453: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf": Phase="Running", Reason="", readiness=true. Elapsed: 47.292192ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-z5pf" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt": Phase="Running", Reason="", readiness=true. Elapsed: 47.456844ms Jan 29 08:09:38.454: INFO: Pod "kube-dns-autoscaler-5f6455f985-sfpjt" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk": Phase="Running", Reason="", readiness=true. Elapsed: 47.323382ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-kkkk" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "metadata-proxy-v0.1-7wz67": Phase="Running", Reason="", readiness=true. Elapsed: 47.969144ms Jan 29 08:09:38.454: INFO: Pod "metadata-proxy-v0.1-7wz67" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb": Phase="Running", Reason="", readiness=true. Elapsed: 47.815947ms Jan 29 08:09:38.454: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ndwb" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.454: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ndwb metadata-proxy-v0.1-67wn6] Jan 29 08:09:38.454: INFO: Getting external IP address for bootstrap-e2e-minion-group-ndwb Jan 29 08:09:38.454: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ndwb(104.199.118.209:22) Jan 29 08:09:38.455: INFO: Pod "metadata-proxy-v0.1-9b6hn": Phase="Running", Reason="", readiness=true. Elapsed: 48.281199ms Jan 29 08:09:38.455: INFO: Pod "metadata-proxy-v0.1-9b6hn" satisfied condition "running and ready, or succeeded" Jan 29 08:09:38.455: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-kkkk metadata-proxy-v0.1-9b6hn] Jan 29 08:09:38.455: INFO: Getting external IP address for bootstrap-e2e-minion-group-kkkk Jan 29 08:09:38.455: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-kkkk(34.168.132.145:22) Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: stdout: "" Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: stderr: "" Jan 29 08:09:38.972: INFO: ssh prow@104.199.118.209:22: exit code: 0 Jan 29 08:09:38.972: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ndwb condition Ready to be false Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: stdout: "" Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: stderr: "" Jan 29 08:09:38.980: INFO: ssh prow@34.168.132.145:22: exit code: 0 Jan 29 08:09:38.980: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-kkkk condition Ready to be false Jan 29 08:09:39.014: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:39.022: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:40.495: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.088818909s Jan 29 08:09:40.495: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:41.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:41.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:42.494: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.087919434s Jan 29 08:09:42.495: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:43.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:43.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:44.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.123961612s Jan 29 08:09:44.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-z5pf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 08:07:15 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 07:56:37 +0000 UTC }] Jan 29 08:09:45.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:09:45.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:10:46.496: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://34.168.148.246/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": stream error: stream ID 1919; INTERNAL_ERROR; received from peer Jan 29 08:10:46.496: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 29 08:10:46.496: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-sfpjt kube-proxy-bootstrap-e2e-minion-group-z5pf metadata-proxy-v0.1-7wz67] Jan 29 08:10:46.496: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 07:56:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:07:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:07:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 07:56:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.2.29 PodIPs:[{IP:10.64.2.29}] StartTime:2023-01-29 07:56:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:06:01 +0000 UTC,FinishedAt:2023-01-29 08:07:15 +0000 UTC,ContainerID:containerd://0de4853acda0dbb798fdd22f658c13f02a1f2a071ca83f67365af60295846370,}} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://0de4853acda0dbb798fdd22f658c13f02a1f2a071ca83f67365af60295846370 Started:0xc00253042f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 08:10:47.191: INFO: Couldn't get node bootstrap-e2e-minion-group-ndwb Jan 29 08:10:47.197: INFO: Couldn't get node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:32.233: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:32.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:34.276: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:34.278: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:36.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:36.321: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 08:11:37.828: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0129 08:09:56.556219 1 main.go:125] Version: v6.1.0 I0129 08:09:56.557595 1 main.go:168] Metrics path successfully registered at /metrics I0129 08:09:56.557804 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] E0129 08:10:56.567012 1 main.go:86] Failed to list v1 volumesnapshots with error=Get "https://10.0.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshots": stream error: stream ID 1; INTERNAL_ERROR; received from peer Jan 29 08:11:37.828: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller: I0129 08:09:56.556219 1 main.go:125] Version: v6.1.0 I0129 08:09:56.557595 1 main.go:168] Metrics path successfully registered at /metrics I0129 08:09:56.557804 1 main.go:174] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s] E0129 08:10:56.567012 1 main.go:86] Failed to list v1 volumesnapshots with error=Get "https://10.0.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshots": stream error: stream ID 1; INTERNAL_ERROR; received from peer Jan 29 08:11:38.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-kkkk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:38.363: INFO: Condition Ready of node bootstrap-e2e-minion-group-ndwb is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 08:11:40.361: INFO: Node bootstrap-e2e-minion-group-kkkk didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-ndwb didn't reach desired Ready condition status (false) within 2m0s Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-kkkk failed reboot test. Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-ndwb failed reboot test. Jan 29 08:11:40.363: INFO: Node bootstrap-e2e-minion-group-z5pf failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 08:11:40.364 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 08:11:40.364 (2m2.174s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:11:40.364 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 08:11:40.364 Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-mxv6m to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.022409416s (1.022418953s including waiting) Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container coredns Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.414: INFO: event for coredns-6846b5b5f-mxv6m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-mxv6m Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Readiness probe failed: Get "http://10.64.3.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-mxv6m: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container coredns failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
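Stepping back to the failure itself: the reboot was issued over SSH at 08:09:38, but neither bootstrap-e2e-minion-group-kkkk nor bootstrap-e2e-minion-group-ndwb ever reported Ready=false within the 2m0s window (the z5pf wait had already failed on an apiserver stream error), so the test gave up at reboot.go:190. A minimal standalone sketch of that kind of node-condition wait, written with client-go; this is not the framework's own helper, and the kubeconfig path, node name and intervals are assumptions:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady returns the status of the node's Ready condition.
func nodeReady(n *corev1.Node) corev1.ConditionStatus {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

// waitForReady polls until the Ready condition matches want or the timeout expires.
func waitForReady(cs *kubernetes.Clientset, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// The run above hits the same kind of transient "Couldn't get node" error at 08:10:47.
			fmt.Printf("couldn't get node %s: %v\n", name, err)
		} else if nodeReady(n) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s did not reach Ready=%s within %s", name, want, timeout)
}

func main() {
	// The job above uses /workspace/.kube/config; here the path comes from $KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The test expects Ready to flip to false within 2m0s of the reboot command.
	if err := waitForReady(cs, "bootstrap-e2e-minion-group-kkkk", corev1.ConditionFalse, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}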
Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-xx69z to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.332873541s (3.332885491s including waiting) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.9:8181/ready": dial tcp 10.64.2.9:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
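The coredns probe failures above come in two flavours: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" when the endpoint hangs past the probe timeout, and "connect: connection refused" when nothing is listening on the port at all. A rough reproduction of what the kubelet's HTTP GET probe does, assuming you can reach the pod network and reusing a pod IP and port from the event text:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A short client timeout, roughly the default probe timeoutSeconds of 1s;
	// a hung endpoint then surfaces as a context-deadline error, a closed port as connection refused.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.64.2.9:8181/ready")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}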
Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container coredns Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-xx69z_kube-system(25c9d77e-fa01-4def-bbd4-fecdd567d047) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.21:8181/ready": dial tcp 10.64.2.21:8181: connect: connection refused Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f-xx69z: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: Get "http://10.64.2.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-xx69z Jan 29 08:11:40.415: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-mxv6m Jan 29 08:11:40.415: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 08:11:40.415: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
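This whole block is the AfterEach dumping every event in kube-system. The same information can be pulled for a single object after the fact with a field selector on involvedObject.name; a sketch, with the namespace and pod name taken from the log and the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	evs, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=coredns-6846b5b5f-xx69z",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// Same shape as the lines above: {source} reason: message
		fmt.Printf("{%s %s} %s: %s\n", e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}

From the command line, kubectl -n kube-system get events --field-selector involvedObject.name=coredns-6846b5b5f-xx69z gives the same view.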
Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:11:40.415: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3abc7 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_f409 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_38576 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d7b3 became leader Jan 29 08:11:40.415: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_15ee3 became leader Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-5fbzh to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 676.046267ms (676.059705ms including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
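For reference, the reboot itself was triggered earlier with SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" against each node's external IP on port 22 as the prow user. Reduced to a standalone program with golang.org/x/crypto/ssh; the key path is an assumption, the user and address come from the log, and the host key check is skipped only because this is throwaway test tooling:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed GCE-style key location; substitute whatever key the prow user accepts.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/google_compute_engine")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "34.168.132.145:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// The sleep lets the SSH command return exit code 0, as logged above, before the VM goes down.
	out, err := sess.CombinedOutput("nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &")
	fmt.Printf("stdout+stderr: %q, err: %v\n", out, err)
}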
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-5fbzh_kube-system(9571086c-623c-41c0-955d-d460a6dd0ed2) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "http://10.64.1.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-5fbzh: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-dr7js to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.980633764s (1.980644127s including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-dr7js_kube-system(e1a4e00e-3934-4848-9a66-be9d8c0b101f) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-dr7js: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rnjhw to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 637.095564ms (637.1052ms including waiting) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
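The "Back-off restarting failed container" events above, like the CrashLoopBackOff status dumped earlier for volume-snapshot-controller-0, all surface through the container statuses on the pod object. A small reader for exactly those fields; pod name, container name and namespace are the ones from the log, the kubeconfig path is assumed:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "volume-snapshot-controller-0", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
		if s.State.Waiting != nil {
			// For the pod above this prints CrashLoopBackOff plus the back-off message.
			fmt.Printf("  waiting: %s (%s)\n", s.State.Waiting.Reason, s.State.Waiting.Message)
		}
		if t := s.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last exit: code=%d reason=%s finished=%s\n", t.ExitCode, t.Reason, t.FinishedAt)
		}
	}
}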
Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container konnectivity-agent Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Failed: Error: failed to get sandbox container task: no running task found: task 4ef63a8d4502cb0295416ca4a4f1b807b6a0f2f7059b915d805f859c9f3445b5 not found: not found Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rnjhw_kube-system(4360ba31-7846-46f7-8c84-29877a07a656) Jan 29 08:11:40.415: INFO: event for konnectivity-agent-rnjhw: {kubelet bootstrap-e2e-minion-group-ndwb} Unhealthy: Liveness probe failed: Get "http://10.64.3.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-dr7js Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-5fbzh Jan 29 08:11:40.415: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rnjhw Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 08:11:40.415: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 08:11:40.415: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 08:11:40.415: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
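The "Retrieving log for container ... / for the last terminated container ..." lines earlier come from the pod log endpoint; the previous instance's output is usually what explains a back-off like the ones above (here it showed the snapshot controller failing to list volumesnapshots over a broken apiserver stream). A minimal equivalent for the same container, kubeconfig path assumed:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	req := cs.CoreV1().Pods("kube-system").GetLogs("volume-snapshot-controller-0", &corev1.PodLogOptions{
		Container: "volume-snapshot-controller",
		Previous:  true, // log of the last terminated instance, as collected in the run above
	})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}

The CLI shortcut for the same thing is kubectl -n kube-system logs -p volume-snapshot-controller-0 -c volume-snapshot-controller.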
Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c7e426b9-38fc-4c7f-b4fc-f070398d9e0e became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_2b2c293c-76ee-41be-8eb8-f980d4fa01a1 became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a720fece-9ceb-41c3-8abf-b82f0fc29f13 became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_dc6aabca-419e-4b82-881a-a69a55bcf97f became leader Jan 29 08:11:40.415: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_9109c6d5-4ebf-4db3-a5e7-2869de648c91 became leader Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-sfpjt to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.278049196s (3.278058964s including waiting) Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
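The per-pod gate used throughout this run, "running and ready, or succeeded", reduces to the pod phase plus the Ready condition, which is exactly what volume-snapshot-controller-0 kept failing. A standalone version of that predicate, not the framework's helper; the pod checked is one of those listed in the log and the kubeconfig path is assumed:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// runningAndReadyOrSucceeded mirrors the condition string printed in the log above.
func runningAndReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-dns-autoscaler-5f6455f985-sfpjt", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: running and ready, or succeeded = %v\n", pod.Name, runningAndReadyOrSucceeded(pod))
}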
Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container autoscaler Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985-sfpjt: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-sfpjt_kube-system(19102d18-f113-4479-a30b-b5e1ffe4f405) Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-sfpjt Jan 29 08:11:40.415: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-kkkk: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-kkkk_kube-system(4519601567f1523d5567ec952650e112) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ndwb: {kubelet bootstrap-e2e-minion-group-ndwb} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ndwb_kube-system(2d3313b36191cd5f359e56c9a4140294) Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container kube-proxy Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for kube-proxy-bootstrap-e2e-minion-group-z5pf: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-z5pf_kube-system(d25d661a11fddc5eb34e96f57ad37366) Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
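The recurring DNSConfigForming warnings ("Nameserver limits were exceeded ... the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1") mean the node's resolv.conf lists more nameservers than the kubelet will pass through, so the extras are dropped. A quick check to run against a node's resolv.conf; the path and the three-server limit are the usual defaults, stated here as assumptions rather than read from this cluster's config:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf") // on the node itself
	if err != nil {
		panic(err)
	}
	defer f.Close()
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Printf("%d nameservers: %v\n", len(servers), servers)
	if len(servers) > 3 {
		fmt.Println("more than three nameservers: expect DNSConfigForming events like the ones above")
	}
}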
Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 29 08:11:40.415: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_fc0b0a85-41c4-4dec-ac86-abf3fce22b5a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_990125e5-6222-4b04-8d02-6b89ac6a4c2c became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_210dbef6-31de-436a-bc0b-7ce6daa2453a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_813596ae-d86e-4698-ab4f-55e59d099d5a became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_21a597fd-2489-494d-8ed4-c939ab76f470 became leader Jan 29 08:11:40.415: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_d4386016-19d3-4013-9169-36494a5b7e73 became leader Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-dr7rr to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.685937785s (1.685947189s including waiting) Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container default-http-backend Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Liveness probe failed: Get "http://10.64.2.16:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99-dr7rr: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 08:11:40.415: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-dr7rr Jan 29 08:11:40.415: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 08:11:40.415: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
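The ScalingReplicaSet and SuccessfulCreate events above are the deployment and replicaset controllers doing their normal work; whether those workloads actually settled after the reboots can be read off the deployment status. A sketch that prints it for the kube-system deployments named in this log, kubeconfig path assumed:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, name := range []string{"coredns", "kube-dns-autoscaler", "l7-default-backend"} {
		d, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println(err)
			continue
		}
		want := int32(1)
		if d.Spec.Replicas != nil { // defaulted by the API server, but guard anyway
			want = *d.Spec.Replicas
		}
		fmt.Printf("%s: %d/%d replicas ready (updated %d)\n",
			d.Name, d.Status.ReadyReplicas, want, d.Status.UpdatedReplicas)
	}
}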
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-67wn6 to bootstrap-e2e-minion-group-ndwb Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 833.493822ms (833.533685ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083002915s (2.083052486s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-67wn6: {kubelet bootstrap-e2e-minion-group-ndwb} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7wz67 to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 755.258946ms (755.278068ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.86513205s (1.865157696s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-7wz67: {kubelet bootstrap-e2e-minion-group-z5pf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-9b6hn to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 809.552191ms (809.582919ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.883268685s (1.88329395s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
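
The repeated DNSConfigForming warnings mean the node's /etc/resolv.conf lists more nameservers than a glibc resolver will honour (conventionally three, glibc's MAXNS), so the kubelet drops the extras and applies only the first three, which is why the logged line shows exactly "1.1.1.1 8.8.8.8 1.0.0.1". A standalone sketch of that check, illustrative only and not kubelet code, with the limit of three as the stated assumption:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed cap, matching the conventional glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry from the file.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	// If the limit is exceeded, warn and keep only the first three,
	// which is the behaviour the DNSConfigForming events describe.
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
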
Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-9b6hn: {kubelet bootstrap-e2e-minion-group-kkkk} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pfnzl to bootstrap-e2e-master Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 704.215502ms (704.236581ms including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.907263274s (1.90727094s including waiting) Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1-pfnzl: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pfnzl Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-9b6hn Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7wz67 Jan 29 08:11:40.415: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-67wn6 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for 
metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-rtlfm to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.91715389s (3.917163412s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.044685713s (3.044692875s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c-rtlfm: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-rtlfm Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn to bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400139366s (1.400164876s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.081461821s (1.081475923s including waiting) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Created: Created container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Started: Started container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 08:11:40.415: 
INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Killing: Stopping container metrics-server-nanny Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} Unhealthy: Liveness probe failed: Get "https://10.64.1.5:10250/livez": dial tcp 10.64.1.5:10250: connect: connection refused Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {kubelet bootstrap-e2e-minion-group-kkkk} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-rxlfn_kube-system(8d8a9473-ef41-4d81-bfa8-74398e51df6c) Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9-rxlfn: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-rxlfn Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 08:11:40.415: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
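
The Unhealthy events above come from the kubelet's HTTPS readiness and liveness probes against the metrics-server pod IP on port 10250: "connection refused" means nothing is listening (the container is down or restarting), a Client.Timeout is what the deliberately dropped inbound packets produce, and a 500 means the endpoint answered but reported not-ready. A rough stand-in for one such probe, using an address and path taken from the failed probes logged above (the timeout value here is an assumption; the real one comes from the pod spec):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // assumed; the kubelet uses the probe's timeoutSeconds
		Transport: &http.Transport{
			// Probes against the pod's self-signed serving cert skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.64.1.5:10250/readyz")
	if err != nil {
		// e.g. "connect: connection refused" or "Client.Timeout exceeded while awaiting headers"
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// A 500 here corresponds to the "HTTP probe failed with statuscode: 500" events.
	fmt.Println("probe status:", resp.StatusCode)
}
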
Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-z5pf Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.869950781s (3.869960439s including waiting) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
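
The BackOff events ("Back-off restarting failed container volume-snapshot-controller ...") are the kubelet's crash-loop backoff: each restart of a failing container is delayed by an exponentially growing interval with a ceiling of a few minutes (commonly 10s doubling up to 5m, reset once the container runs cleanly for a while). A small sketch of that schedule; the constants are an assumption, not taken from this run:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet-style crash-loop backoff: start at 10s, double on
	// each failure, cap at 5 minutes.
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d: wait %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
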
Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Created: Created container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Started: Started container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} Killing: Stopping container volume-snapshot-controller Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-z5pf} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f68e02f2-35da-4ff2-81fa-ed586b7b84bb) Jan 29 08:11:40.415: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 08:11:40.416 (52ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:11:40.416 Jan 29 08:11:40.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 08:11:40.458 (42ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:11:40.458 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 08:11:40.458 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:11:40.458 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:11:40.458 STEP: Collecting events from namespace "reboot-8627". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 08:11:40.458 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 08:11:40.499 Jan 29 08:11:40.539: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 08:11:40.539: INFO: Jan 29 08:11:40.582: INFO: Logging node info for node bootstrap-e2e-master Jan 29 08:11:40.623: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master e2d71906-d1d7-40bb-8ec1-0ff5ab8ca7c0 1973 0 2023-01-29 07:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 07:56:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 07:56:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 08:07:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:41 +0000 UTC,LastTransitionTime:2023-01-29 07:56:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.148.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4efc3e501c507bb92c88070968370980,SystemUUID:4efc3e50-1c50-7bb9-2c88-070968370980,BootID:60a7bb4c-1e8b-4a40-b89b-863b85f7960f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:40.623: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 08:11:40.670: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 08:11:40.724: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container etcd-container ready: true, restart count 1 Jan 29 08:11:40.724: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 08:11:40.724: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-controller-manager ready: false, restart count 5 Jan 29 08:11:40.724: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-scheduler ready: false, restart count 5 Jan 29 08:11:40.724: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 29 08:11:40.724: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container etcd-container ready: true, restart count 5 Jan 29 08:11:40.724: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 07:55:34 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 29 08:11:40.724: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 07:55:51 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:40.724: INFO: Container l7-lb-controller ready: false, restart count 6 Jan 29 08:11:40.724: INFO: metadata-proxy-v0.1-pfnzl started at 2023-01-29 07:56:38 +0000 UTC (0+2 container statuses recorded) Jan 29 08:11:40.724: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 08:11:40.724: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 08:11:40.887: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 08:11:40.887: INFO: Logging node info for node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.929: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-kkkk 5c1faf37-6a52-4cb6-984b-794e065a9e18 2198 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-kkkk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-kkkk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.132.145,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-kkkk.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:75ae872aa52dfa1c0bd959ea09034479,SystemUUID:75ae872a-a52d-fa1c-0bd9-59ea09034479,BootID:1bff1b86-47f3-4175-b0a6-8c7f181e8951,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:40.930: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:40.974: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-kkkk Jan 29 08:11:41.022: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-kkkk: error trying to reach service: dial tcp 10.138.0.3:10250: connect: connection refused Jan 29 08:11:41.022: INFO: Logging node info for node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.065: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ndwb a872a196-fba1-4b9d-b495-487aec31cb90 2199 0 2023-01-29 07:56:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ndwb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 08:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 08:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-ndwb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:33 +0000 UTC,LastTransitionTime:2023-01-29 08:11:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:07:42 +0000 UTC,LastTransitionTime:2023-01-29 08:02:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:104.199.118.209,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ndwb.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2edc831e1759fe886158939202a48af7,SystemUUID:2edc831e-1759-fe88-6158-939202a48af7,BootID:5d0313ec-818e-4f1a-8e5b-80759c2fb042,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:41.066: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.124: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ndwb Jan 29 08:11:41.169: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-ndwb: error trying to reach service: dial tcp 10.138.0.5:10250: connect: connection refused Jan 29 08:11:41.169: INFO: Logging node info for node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.211: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-z5pf f552791c-eaf5-4935-98c3-f2eaec044ac7 2197 0 2023-01-29 07:56:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-z5pf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 07:56:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-29 08:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 08:08:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 08:11:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ubuntu-slow/us-west1-b/bootstrap-e2e-minion-group-z5pf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 08:11:03 +0000 UTC,LastTransitionTime:2023-01-29 07:59:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 07:56:37 +0000 UTC,LastTransitionTime:2023-01-29 07:56:37 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 08:08:43 +0000 UTC,LastTransitionTime:2023-01-29 08:03:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.224.154,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-z5pf.c.k8s-jkns-e2e-gce-ubuntu-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f2a59ebb63baf48a2871acc042960ed,SystemUUID:0f2a59eb-b63b-af48-a287-1acc042960ed,BootID:2324a0d3-719c-4a04-9037-128191cc6d71,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 08:11:41.211: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.257: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-z5pf Jan 29 08:11:41.308: INFO: coredns-6846b5b5f-xx69z started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container coredns ready: false, restart count 7 Jan 29 
08:11:41.308: INFO: metadata-proxy-v0.1-7wz67 started at 2023-01-29 07:56:24 +0000 UTC (0+2 container statuses recorded) Jan 29 08:11:41.308: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 08:11:41.308: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 08:11:41.308: INFO: konnectivity-agent-dr7js started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container konnectivity-agent ready: true, restart count 5 Jan 29 08:11:41.308: INFO: kube-proxy-bootstrap-e2e-minion-group-z5pf started at 2023-01-29 07:56:23 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 08:11:41.308: INFO: l7-default-backend-8549d69d99-dr7rr started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 08:11:41.308: INFO: volume-snapshot-controller-0 started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container volume-snapshot-controller ready: false, restart count 8 Jan 29 08:11:41.308: INFO: kube-dns-autoscaler-5f6455f985-sfpjt started at 2023-01-29 07:56:37 +0000 UTC (0+1 container statuses recorded) Jan 29 08:11:41.308: INFO: Container autoscaler ready: false, restart count 5 Jan 29 08:11:41.527: INFO: Latency metrics for node bootstrap-e2e-minion-group-z5pf END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 08:11:41.527 (1.069s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 08:11:41.527 (1.069s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:11:41.527 STEP: Destroying namespace "reboot-8627" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 08:11:41.527 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 08:11:41.573 (46ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:11:41.573 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 08:11:41.573 (0s)
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:Reboot\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e JUnit report
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [ReportBeforeSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest kubectl version
kubetest list nodes
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply an invalid CR with extra properties for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown and duplicate fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] SubjectReview should support SubjectReview API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to