go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:26:42.834 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:19:23.936
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:19:23.936 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:19:23.936
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 23:19:23.936
Jan 29 23:19:23.936: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 23:19:23.938
Jan 29 23:19:23.977: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused
    (the same connection-refused error repeats every ~2s, 16 occurrences in total, through 23:19:54.019)
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 23:20:40.519
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 23:20:40.654
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:20:40.77 (1m16.833s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:20:40.77
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:20:40.77 (0s)
> Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 23:20:40.77
Jan 29 23:20:40.953: INFO: Getting bootstrap-e2e-minion-group-88l0
Jan 29 23:20:40.954: INFO: Getting bootstrap-e2e-minion-group-6721
Jan 29 23:20:40.954: INFO: Getting bootstrap-e2e-minion-group-wqbh
Jan 29 23:20:41.001: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wqbh condition Ready to be true
Jan 29 23:20:41.002: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-88l0 condition Ready to be true
Jan 29 23:20:41.002: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6721 condition Ready to be true
Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-wqbh has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv]
Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-6721 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk]
Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-88l0 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv]
Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk]
    (followed by one per-pod "Waiting up to 5m0s for pod ... in namespace "kube-system" to be "running and ready, or succeeded"" line for each of the eight pods)
Jan 29 23:20:41.095: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.218796ms
Jan 29 23:20:41.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:20:41.095: INFO: Pod "metadata-proxy-v0.1-qj6hk": Phase="Running", Reason="", readiness=true. Elapsed: 48.286591ms
Jan 29 23:20:41.095: INFO: Pod "metadata-proxy-v0.1-qj6hk" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.098: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 50.595555ms
Jan 29 23:20:41.098: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721": Phase="Running", Reason="", readiness=true. Elapsed: 51.769605ms
Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.099: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk]
Jan 29 23:20:41.099: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721
Jan 29 23:20:41.099: INFO: SSH to bootstrap-e2e-minion-group-6721 (35.197.20.238:22), running the drop-outbound script:
    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &
Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh": Phase="Running", Reason="", readiness=true. Elapsed: 52.209489ms
Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.099: INFO: Pod "metadata-proxy-v0.1-2vqlc": Phase="Running", Reason="", readiness=true. Elapsed: 52.186892ms
Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-2vqlc" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-f9lnv": Phase="Running", Reason="", readiness=true. Elapsed: 52.426998ms
Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-f9lnv" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.100: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv]
Jan 29 23:20:41.100: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh
Jan 29 23:20:41.100: INFO: SSH to bootstrap-e2e-minion-group-wqbh (35.185.219.215:22), running the same drop-outbound script
Jan 29 23:20:41.100: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0": Phase="Running", Reason="", readiness=true. Elapsed: 52.582703ms
Jan 29 23:20:41.100: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0" satisfied condition "running and ready, or succeeded"
Jan 29 23:20:41.633: INFO: ssh prow@35.185.219.215:22: command: (the drop-outbound script above); stdout: ""; stderr: ""; exit code: 0
Jan 29 23:20:41.633: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wqbh condition Ready to be false
Jan 29 23:20:41.635: INFO: ssh prow@35.197.20.238:22: command: (the drop-outbound script above); stdout: ""; stderr: ""; exit code: 0
Jan 29 23:20:41.635: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6721 condition Ready to be false
Jan 29 23:20:41.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:20:41.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:20:43.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.090315654s
Jan 29 23:20:43.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True} (conditions unchanged from the dump above)
Jan 29 23:20:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false.
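The injected script above installs two iptables rules (accept loopback first, then drop everything else) and later removes them in reverse order. A minimal sketch of that technique, with one assumption added for illustration: `IPT` is a hypothetical indirection, not part of the test, so the rule flow can be exercised without root (set `IPT=echo` for a dry run).

```shell
# Sketch of the drop-outbound technique seen in the log.
# IPT is a hypothetical hook added here for testability; on a real
# node it resolves to "sudo iptables".
IPT="${IPT:-sudo iptables}"

drop_outbound() {
  # Rule 1: keep loopback traffic flowing so local daemons still work.
  $IPT -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT || return 1
  # Rule 2: drop every other outbound packet from the node.
  $IPT -I OUTPUT 2 -j DROP || return 1
}

restore_outbound() {
  # Delete the rules in reverse order of insertion.
  $IPT -D OUTPUT -j DROP || return 1
  $IPT -D OUTPUT -s 127.0.0.1 -j ACCEPT || return 1
}
```

The test wraps the real equivalent in `nohup sh -c '...' &` with a `sleep 10` before the drop and a `sleep 120` between drop and restore, so the SSH session can disconnect before its own return traffic starts being dropped.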
Elapsed: 2.092927572s
Jan 29 23:20:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:20:43.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:20:43.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
    (this poll cycle repeats every ~2s through 23:21:11 with no change: pod "volume-snapshot-controller-0" stays Running but not Ready with the same conditions, pod "kube-dns-autoscaler-5f6455f985-fnk2j" stays Pending, and nodes bootstrap-e2e-minion-group-wqbh and bootstrap-e2e-minion-group-6721 keep reporting Condition Ready true instead of false)
Jan 29 23:21:11.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 30.090635217s Jan 29 23:21:11.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 30.092928386s Jan 29 23:21:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:12.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:12.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:13.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.09132865s Jan 29 23:21:13.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:13.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 32.093364392s Jan 29 23:21:13.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:14.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:14.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:15.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.090683987s Jan 29 23:21:15.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:15.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 34.092010001s Jan 29 23:21:15.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:16.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:16.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:17.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.090851234s Jan 29 23:21:17.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:17.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 36.093340064s Jan 29 23:21:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:18.547: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:18.547: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:19.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.090517276s Jan 29 23:21:19.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:19.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 38.092387334s Jan 29 23:21:19.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:20.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:20.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:21.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.090893642s Jan 29 23:21:21.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:21.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 40.092445454s Jan 29 23:21:21.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:22.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:22.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:23.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.091035617s Jan 29 23:21:23.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:23.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 42.092317294s Jan 29 23:21:23.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:24.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:24.681: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:25.187: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 44.14s Jan 29 23:21:25.187: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:25.188: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.141187937s Jan 29 23:21:25.188: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:26.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:26.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:27.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.092395681s Jan 29 23:21:27.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:27.142: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 46.094971004s Jan 29 23:21:27.142: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:28.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:28.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:29.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.090270202s Jan 29 23:21:29.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:29.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 48.092514874s Jan 29 23:21:29.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:30.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:30.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:31.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.090179348s Jan 29 23:21:31.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:31.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 50.092723582s Jan 29 23:21:31.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:32.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:32.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:33.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.091154748s Jan 29 23:21:33.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:33.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 52.092224958s Jan 29 23:21:33.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:34.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:34.904: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:35.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.091528766s Jan 29 23:21:35.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:35.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 54.0930205s Jan 29 23:21:35.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:36.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:36.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:37.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.09277479s Jan 29 23:21:37.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 56.093818188s Jan 29 23:21:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:38.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:38.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.09062543s Jan 29 23:21:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:39.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 58.092037294s Jan 29 23:21:39.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:41.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:41.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:41.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.091456492s Jan 29 23:21:41.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.093107436s Jan 29 23:21:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:43.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:43.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:43.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.091026662s Jan 29 23:21:43.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.092551607s Jan 29 23:21:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:45.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:45.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:45.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.090461407s
Jan 29 23:21:45.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:21:45.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.092075271s
Jan 29 23:21:45.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:21:47.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:21:47.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
... (the same three checks — volume-snapshot-controller-0 Running but not Ready, kube-dns-autoscaler-5f6455f985-fnk2j Pending, and nodes bootstrap-e2e-minion-group-6721/-wqbh Ready=true instead of false — repeat every ~2s with identical results from 23:21:47 through 23:22:32) ...
Jan 29 23:22:33.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.091628486s
Jan 29 23:22:33.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:22:33.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.093245231s
Jan 29 23:22:33.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:22:34.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:22:34.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:22:35.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.094017389s Jan 29 23:22:35.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:35.143: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.095633111s Jan 29 23:22:35.143: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:36.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:36.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:37.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.090635951s Jan 29 23:22:37.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.09330821s Jan 29 23:22:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:38.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:38.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.090946576s Jan 29 23:22:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:39.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.092584488s Jan 29 23:22:39.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:40.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:40.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:41.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m0.090673767s Jan 29 23:22:41.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.092626927s Jan 29 23:22:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:42.327: INFO: Node bootstrap-e2e-minion-group-6721 didn't reach desired Ready condition status (false) within 2m0s Jan 29 23:22:42.327: INFO: Node bootstrap-e2e-minion-group-wqbh didn't reach desired Ready condition status (false) within 2m0s Jan 29 23:22:43.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.091037452s Jan 29 23:22:43.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.092323691s Jan 29 23:22:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:45.144: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.096581519s Jan 29 23:22:45.144: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:45.144: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.096855495s Jan 29 23:22:45.144: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:47.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.093869386s Jan 29 23:22:47.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.093743055s Jan 29 23:22:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:49.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.090566559s Jan 29 23:22:49.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:49.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.091967075s Jan 29 23:22:49.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:51.166: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.118661418s Jan 29 23:22:51.166: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:51.166: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.118641233s Jan 29 23:22:51.166: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:53.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.090222649s Jan 29 23:22:53.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:53.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.091932593s Jan 29 23:22:53.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:55.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.091148905s Jan 29 23:22:55.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:55.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.092886614s Jan 29 23:22:55.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:57.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.091993283s Jan 29 23:22:57.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:57.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.093812633s Jan 29 23:22:57.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:59.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.090618459s Jan 29 23:22:59.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:59.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.092931641s Jan 29 23:22:59.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:01.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.090651135s Jan 29 23:23:01.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:01.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.092970314s Jan 29 23:23:01.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:03.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m22.090088764s Jan 29 23:23:03.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:03.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.092471459s Jan 29 23:23:03.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:05.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.090164541s Jan 29 23:23:05.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:05.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.091683355s Jan 29 23:23:05.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:07.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.091719453s Jan 29 23:23:07.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:07.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.092828136s Jan 29 23:23:07.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:09.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.092981924s Jan 29 23:23:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:09.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.093125957s Jan 29 23:23:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.092838366s Jan 29 23:23:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:11.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.09390196s Jan 29 23:23:11.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:13.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m32.091590951s Jan 29 23:23:13.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:13.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.093055803s Jan 29 23:23:13.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:15.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m34.09087927s Jan 29 23:23:15.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:15.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.092277155s Jan 29 23:23:15.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:17.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.090978362s Jan 29 23:23:17.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:17.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.093959416s Jan 29 23:23:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:19.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m38.090573825s Jan 29 23:23:19.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:19.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.093490909s Jan 29 23:23:19.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:21.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
[... identical poll output elided: the same two messages — "Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}" and "want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'" — repeat every ~2s from Elapsed: 2m40s (23:23:21) through Elapsed: 3m40s (23:24:21) ...]
Elapsed: 3m42.091242396s Jan 29 23:24:23.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:23.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.092640273s Jan 29 23:24:23.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:25.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m44.091525016s Jan 29 23:24:25.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:25.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.093000799s Jan 29 23:24:25.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:27.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m46.091853087s Jan 29 23:24:27.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:27.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.093611454s Jan 29 23:24:27.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:29.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m48.090101289s Jan 29 23:24:29.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:29.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.092438908s Jan 29 23:24:29.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:31.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m50.090081439s Jan 29 23:24:31.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:31.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.093047264s Jan 29 23:24:31.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:33.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m52.090873003s Jan 29 23:24:33.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:33.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.092253143s Jan 29 23:24:33.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:35.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m54.091251818s Jan 29 23:24:35.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:35.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.092751889s Jan 29 23:24:35.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:37.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m56.092909993s Jan 29 23:24:37.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.093988992s Jan 29 23:24:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m58.090379839s Jan 29 23:24:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:39.144: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.096466048s Jan 29 23:24:39.144: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:41.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m0.090424309s Jan 29 23:24:41.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.092822824s Jan 29 23:24:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:43.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m2.091499248s Jan 29 23:24:43.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.092983985s Jan 29 23:24:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:45.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m4.091123163s Jan 29 23:24:45.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:45.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.092419478s Jan 29 23:24:45.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:47.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m6.090745158s Jan 29 23:24:47.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:47.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.093982433s Jan 29 23:24:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:49.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m8.090937741s Jan 29 23:24:49.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:49.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.092267427s Jan 29 23:24:49.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:51.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m10.090987169s Jan 29 23:24:51.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:51.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.092186708s Jan 29 23:24:51.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:53.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m12.089762389s Jan 29 23:24:53.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:53.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.092358567s Jan 29 23:24:53.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:55.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m14.090268114s Jan 29 23:24:55.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:55.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.092527632s Jan 29 23:24:55.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:57.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m16.090889797s Jan 29 23:24:57.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:57.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.093134222s Jan 29 23:24:57.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:59.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m18.0903326s Jan 29 23:24:59.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:24:59.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.091885096s Jan 29 23:24:59.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:01.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m20.09021149s Jan 29 23:25:01.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:01.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.091504334s Jan 29 23:25:01.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:03.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m22.090739307s Jan 29 23:25:03.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:03.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.092390257s Jan 29 23:25:03.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:05.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.091074865s Jan 29 23:25:05.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:05.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.092368443s Jan 29 23:25:05.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:07.146: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m26.099055877s Jan 29 23:25:07.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:07.149: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.101505139s Jan 29 23:25:07.149: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:09.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.090844089s Jan 29 23:25:09.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:09.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.092415599s Jan 29 23:25:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:11.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m30.092008064s
Jan 29 23:25:11.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:25:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.093037937s
Jan 29 23:25:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
[the same two poll results, with unchanged pod conditions, repeated every ~2s from 23:25:13 (Elapsed 4m32s) through 23:25:37 (Elapsed 4m56s)]
Jan 29 23:25:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 4m58.090417824s
Jan 29 23:25:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:25:39.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.093218749s
Jan 29 23:25:39.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m16.834s)
  test/e2e/cloud/gcp/reboot.go:144
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/cloud/gcp/reboot.go:144
  Spec Goroutine
  goroutine 7962 [semacquire, 5 minutes]
    sync.runtime_Semacquire(0xc0009e92d8?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fa980334a20?)
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0xc00015c680, 0x187}, 0xc004ae05a0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7fa980334a20, 0xc0041cee40})
      test/e2e/cloud/gcp/reboot.go:149
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0041cee40})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 7965 [chan receive, 5 minutes]
    k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540, 0x4, 0x4}, 0x45d964b800, ...)
      test/e2e/framework/pod/resource.go:531
    k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...)
      test/e2e/framework/pod/resource.go:508
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:284
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 23:25:41.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 5m0.090684153s
Jan 29 23:25:41.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:25:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.092780367s
Jan 29 23:25:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:25:41.180: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.132632821s
Jan 29 23:25:41.180: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:25:41.180: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded.
Jan 29 23:25:41.182: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.134480313s
Jan 29 23:25:41.182: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:25:41.182: INFO: Pod kube-dns-autoscaler-5f6455f985-fnk2j failed to be running and ready, or succeeded.
Jan 29 23:25:41.182: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:25:41.182: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:16:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:16:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-29 23:02:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4
ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://2a572fc68ee46a9092bd3974970fa73307b65446abcf982e614a7bda96792a22 Started:0xc001257d1a}] QOSClass:Burstable EphemeralContainerStatuses:[]}
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m36.837s)
  test/e2e/cloud/gcp/reboot.go:144
  In [It] (Node Runtime: 5m20.004s)
    test/e2e/cloud/gcp/reboot.go:144
  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    [frames identical to the previous progress report: testReboot blocked in sync.(*WaitGroup).Wait]
  Goroutines of Interest
  goroutine 7965 [select, 2 minutes]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000aa2d80, 0xc002686400)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0013df710, 0xc002686400, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002bf0000?}, 0xc002686400?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc002bf0000, 0xc002686400)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc000671800?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0049fa3c0, 0xc002686300)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00020f9c0, 0xc002686200)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc002686200, {0x80d5d80, 0xc00020f9c0}, {0x75d65c0?, 0x2675501?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0049fa3f0, 0xc002686200, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0049fa3f0, 0xc002686200)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00531e480, {0x7fa980334a20, 0xc0041cee40}, 0x7fa9805a55a8?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00531e480, {0x7fa980334a20, 0xc0041cee40})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x7fa980334a20, 0xc0041cee40}, {0x8147108?, 0xc004b3d860?}, {0xc001257ad0, 0xb}, {0xc002f1dfb0, 0x24}, {0xc001257d20, 0xa}, ...)
      test/e2e/framework/pod/resource.go:572
    k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
      test/e2e/framework/pod/resource.go:543
    > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...})
      test/e2e/cloud/gcp/reboot.go:221
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:285
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 23:26:11.226: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-fnk2j/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-fnk2j):
Jan 29 23:26:11.226: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-fnk2j/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-fnk2j):
Jan 29 23:26:11.226: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:15:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:15:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22
+0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-29 23:02:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 23:13:04 +0000 UTC,FinishedAt:2023-01-29 23:13:35 +0000 UTC,ContainerID:containerd://9f1b3f62046306065becc07178c0c35df2575ff44e8f2df8723b0024b1585573,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://9f1b3f62046306065becc07178c0c35df2575ff44e8f2df8723b0024b1585573 Started:0xc004da2737}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m56.839s)
  test/e2e/cloud/gcp/reboot.go:144
  In [It] (Node Runtime: 5m40.006s)
    test/e2e/cloud/gcp/reboot.go:144
  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    [frames identical to the previous progress report: testReboot blocked in sync.(*WaitGroup).Wait]
  Goroutines of Interest
  goroutine 7965 [select]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000aa2d80, 0xc000341400)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0013df710, 0xc000341400, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002bf0000?}, 0xc000341400?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc002bf0000, 0xc000341400)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc0006717d0?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0049fa3c0, 0xc000340400)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00020f9c0, 0xc0011a1e00)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc0011a1e00, {0x80d5d80, 0xc00020f9c0}, {0x75d65c0?, 0x2675701?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0049fa3f0, 0xc0011a1e00, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0049fa3f0, 0xc0011a1e00)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc004500000, {0x7fa980334a20, 0xc0041cee40}, 0x7fa9805a55a8?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc004500000, {0x7fa980334a20, 0xc0041cee40})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x7fa980334a20, 0xc0041cee40}, {0x8147108?, 0xc004b3d860?}, {0xc004da2590, 0xb}, {0xc000500d40, 0x1c}, {0xc000501120, 0x1a}, ...)
      test/e2e/framework/pod/resource.go:572
    k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
      test/e2e/framework/pod/resource.go:543
    > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...})
      test/e2e/cloud/gcp/reboot.go:221
    > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:285
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 7m16.841s)
  test/e2e/cloud/gcp/reboot.go:144
  In [It] (Node Runtime: 6m0.008s)
    test/e2e/cloud/gcp/reboot.go:144
  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    [frames identical to the previous progress report: testReboot blocked in sync.(*WaitGroup).Wait]
  Goroutines of Interest
  goroutine 7965 [select]
    [frames identical to the previous progress report: blocked in http2 (*ClientConn).RoundTrip under pod.GetPodLogs]
test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...}) test/e2e/cloud/gcp/reboot.go:221 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187}) test/e2e/cloud/gcp/reboot.go:285 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 23:26:41.272: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0): Jan 29 23:26:41.272: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0): Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-6721 failed reboot test. Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-88l0 failed reboot test. Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-wqbh failed reboot test. 
Jan 29 23:26:41.272: INFO: Executing termination hook on nodes
Jan 29 23:26:41.272: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721
Jan 29 23:26:41.272: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-6721(35.197.20.238:22)
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 23:20:51 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: stderr: ""
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: exit code: 0
Jan 29 23:26:41.791: INFO: Getting external IP address for bootstrap-e2e-minion-group-88l0
Jan 29 23:26:41.791: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-88l0(34.127.39.177:22)
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: stdout: ""
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: stderr: "cat: /tmp/drop-outbound.log: No such file or directory\n"
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: exit code: 1
Jan 29 23:26:42.315: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-outbound.log: No such file or directory )
Jan 29 23:26:42.315: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh
Jan 29 23:26:42.315: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-wqbh(35.185.219.215:22)
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 23:20:51 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: stderr: ""
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:26:42.834
< Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 23:26:42.834 (6m2.065s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:26:42.834
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 23:26:42.835
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
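The `set -x` traces in the SSH stdout above (captured from /tmp/drop-outbound.log) imply a termination-hook script roughly like the sketch below. This is a hypothetical reconstruction, not the actual k8s test source; the `drop_outbound` function name and the `IPT`/`GRACE`/`HOLD` overrides are illustrative additions so the sketch can be dry-run without touching real firewall rules.

```shell
# Hypothetical reconstruction of the "drop all outbound packets" hook whose
# xtrace output appears in the log above. IPT, GRACE, and HOLD are assumed
# knobs added for dry-running; the real script hard-codes sudo iptables,
# sleep 10, and sleep 120.
drop_outbound() {
    local ipt="${IPT:-sudo iptables}"   # set IPT=echo for a harmless dry run
    sleep "${GRACE:-10}"                # let the SSH session that launched us detach
    while true; do                      # retry until the rule inserts cleanly
        $ipt -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break   # keep loopback working
    done
    while true; do
        $ipt -I OUTPUT 2 -j DROP && break                  # blackhole all other outbound traffic
    done
    date                                # timestamp when the blackhole began
    sleep "${HOLD:-120}"                # hold long enough for the node to go NotReady
    while true; do                      # then restore connectivity
        $ipt -D OUTPUT -j DROP && break
    done
    while true; do
        $ipt -D OUTPUT -s 127.0.0.1 -j ACCEPT && break
    done
}
```

Read this way, the log is consistent: nodes 6721 and wqbh ran the hook to completion (exit code 0, full trace in the log), while on 88l0 the hook's log file was never written, which is why the termination hook's `cat` exits 1 there.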
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-4q7fd to bootstrap-e2e-minion-group-88l0 Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.847441403s (3.847450323s including waiting) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.5:8181/ready": dial tcp 10.64.1.5:8181: connect: connection refused Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": dial tcp 10.64.1.14:8181: connect: connection refused Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-4q7fd_kube-system(4a425660-a466-48ab-85da-437da7e618a6) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.20:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gs5tb to bootstrap-e2e-minion-group-6721 Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 987.339ms (987.349102ms including waiting) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container coredns failed liveness probe, will be restarted Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} 
Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-4q7fd Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gs5tb Jan 29 23:26:42.895: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 23:26:42.895: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 23:26:42.895: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c62f became leader Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2342f became leader Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_7cda5 became leader Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad7db became leader Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_6de46 became leader Jan 29 23:26:42.896: INFO: event 
for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_73bee became leader Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-29t5v to bootstrap-e2e-minion-group-6721 Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.689877ms (625.698369ms including waiting) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-29t5v_kube-system(676a6fef-fae9-419c-967c-3c4cabf3b4d0) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9qvb2 to bootstrap-e2e-minion-group-88l0 Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.147175294s (3.147183862s including waiting) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-gx2gz to bootstrap-e2e-minion-group-wqbh Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 588.144499ms (588.157829ms including waiting) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9qvb2 Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-gx2gz Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-29t5v Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1f0b58b9-e3af-40cd-bbf1-df962a1a7d66 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_7e71952d-71c6-4e33-aa88-f17783879913 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0cd69aca-881b-4381-9d05-c8ecaf96e1ad became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c4f828ec-1e3d-476b-946c-47cb9fad7392 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_73715983-9a20-46cb-94e2-8a8288f2370d became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de0acc38-ace8-4a25-be45-fcbfe09e87e1 became leader
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fnk2j to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.672896561s (3.672911757s including waiting)
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fnk2j_kube-system(4c651565-44f7-46bb-aab2-f09040397115)
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-88l0_kube-system(b29b9d68971e1a4886acdb5b2f3d6c29)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1)
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b314dba1-2f34-450d-a940-e032ea959007 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e150da42-06fa-4222-afb3-02a801863fea became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ebce7161-2751-44a8-921a-1bb0c61c7457 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c0e4b771-3ff3-4c12-aabe-26c80f1386d0 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_253cc4ed-29f8-49a2-b15e-60323a46a8b4 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c46daebb-994e-4579-b065-bb062f087e10 became leader
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-6rtzm to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.563762473s (2.56377916s including waiting)
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container default-http-backend
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-6rtzm
Jan 29 23:26:42.896: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vqlc to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 682.317948ms (682.333508ms including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.703991017s (1.704005096s including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8rcgp to bootstrap-e2e-master Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 719.810753ms (719.817586ms including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet 
bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.259551358s (2.259558434s including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f9lnv to bootstrap-e2e-minion-group-wqbh Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 798.75825ms (798.781902ms including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created 
container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.750147538s (1.750158281s including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qj6hk to bootstrap-e2e-minion-group-6721 Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 782.318965ms (782.337545ms including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet 
bootstrap-e2e-minion-group-6721} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.760006025s (1.760016285s including waiting) Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8rcgp Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vqlc Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f9lnv Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qj6hk Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-89n8r to bootstrap-e2e-minion-group-88l0 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.9815654s (1.981573878s including waiting) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.814773422s (2.814784611s including waiting) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server Jan 29 23:26:42.896: INFO: event for 
metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-89n8r Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-89n8r Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-krpjl to bootstrap-e2e-minion-group-wqbh Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.247580285s (1.247590338s including waiting) Jan 29 23:26:42.896: INFO: event for 
metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 963.154622ms (963.162996ms including waiting) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe 
failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": read tcp 10.64.2.1:43390->10.64.2.3:10250: read: connection reset by peer Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Failed: Error: failed to get sandbox container task: no running task found: task 5506be8bc4f89096ef778ab7fca9cfaf82b1876ebd96d8c3fd4f25d1ff33f02a not found: not found Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-krpjl_kube-system(8d0e7263-4537-45a1-934b-f1c130ff5bbc) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet 
bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-88l0 Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.130537452s (2.130545397s including waiting) Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb) Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb)
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:26:42.896 (62ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:26:42.896
Jan 29 23:26:42.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:26:42.942 (46ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:26:42.942
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:26:42.942 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:26:42.942
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:26:42.943
STEP: Collecting events from namespace "reboot-161". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 23:26:42.943
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 23:26:42.984
Jan 29 23:26:43.025: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 23:26:43.025: INFO:
Jan 29 23:26:43.070: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 23:26:43.113: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 756bb90a-38ca-46e9-a519-4ade71c98037 3160 0 2023-01-29 23:02:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.1.140,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c3129ce97a3f63b40e57e6cbe733c44,SystemUUID:5c3129ce-97a3-f63b-40e5-7e6cbe733c44,BootID:b96cb4ed-7649-46c7-9666-6fc4b47e90dd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:26:43.113: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 23:26:43.161: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 23:27:00.198: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 23:27:00.198: INFO: Logging node info for node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.241: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6721 7fc012f3-934c-45a9-9218-4db83f456958 3166 0 2023-01-29 23:02:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6721 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 23:22:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-6721,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.20.238,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8984c0af3840918eae4114a902d64191,SystemUUID:8984c0af-3840-918e-ae41-14a902d64191,BootID:453eaaa2-8c9a-45b6-91c7-cfc147f61b33,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:27:00.241: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.288: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.332: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6721: error trying to reach service: No agent available
Jan 29 23:27:00.332: INFO: Logging node info for node bootstrap-e2e-minion-group-88l0
Jan 29 23:27:00.374: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-88l0 fab5e132-8ec0-42cf-9ad7-40e6250ae11b 3568 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-88l0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 23:26:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 23:26:59 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-88l0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 
UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.39.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0a690fed9ac751f6671eec4a21f76bad,SystemUUID:0a690fed-9ac7-51f6-671e-ec4a21f76bad,BootID:ec68c44f-1879-4b0a-a545-6219e5196494,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:27:00.374: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-88l0
Jan 29 23:27:00.422: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-88l0
Jan 29 23:27:00.465: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-88l0: error trying to reach service: No agent available
Jan 29 23:27:00.465: INFO: Logging node info for node bootstrap-e2e-minion-group-wqbh
Jan 29 23:27:00.508: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-wqbh bb57fe13-d7c7-4a05-8930-31b3fcc9decd 3167 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-wqbh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 23:22:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-wqbh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 
UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.185.219.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:40d8e2caa5d7596dbe41807842d5c069,SystemUUID:40d8e2ca-a5d7-596d-be41-807842d5c069,BootID:6aa05efe-09f8-4f76-a55c-b374dc158bb7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 23:27:00.508: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-wqbh Jan 29 23:27:00.556: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-wqbh Jan 29 23:27:00.600: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-wqbh: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:27:00.6 (17.657s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:27:00.6 (17.658s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:27:00.6 STEP: Destroying namespace "reboot-161" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 23:27:00.6 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:27:00.644 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:27:00.645 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:27:00.645 (0s)
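The "drop all outbound packets" disruption this test applies is visible in the SSH commands echoed in the log below (`nohup sh -c '... iptables -I OUTPUT ...'`). The following is a sketch reconstructing that script from the escaped log text; `IPT` defaults to a dry-run `echo` so the sketch is safe to run unprivileged, whereas the real test uses `sudo iptables` and sleeps 120s between installing and removing the rules.

```shell
#!/bin/sh
# Reconstruction of the e2e test's outbound-blackhole script (from the SSH
# command echoed in the log). Dry-run by default; set IPT="sudo iptables"
# to actually modify the firewall (as the real test does, as root over SSH).
IPT="${IPT:-echo iptables}"

apply_blackhole() {
  # Rule 1: keep loopback traffic flowing so local daemons still work.
  $IPT -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT
  # Rule 2: drop every other outbound packet, cutting the node off from
  # the apiserver (which is why Ready is expected to flip to false).
  $IPT -I OUTPUT 2 -j DROP
}

remove_blackhole() {
  # Tear down in reverse: drop the DROP rule first, then the loopback ACCEPT.
  $IPT -D OUTPUT -j DROP
  $IPT -D OUTPUT -s 127.0.0.1 -j ACCEPT
}

apply_blackhole
# The real script runs `date; sleep 120` here: the "for a while" window
# during which the test waits for the node's Ready condition to go false.
remove_blackhole
```

The `while true; do ... && break; done` loops in the original command serve the same purpose as these calls, retrying each `iptables` invocation until it succeeds; they are omitted here for readability.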
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:26:42.834 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:19:23.936 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:19:23.936 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:19:23.936 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 23:19:23.936 Jan 29 23:19:23.936: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 23:19:23.938 Jan 29 23:19:23.977: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:26.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:28.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:30.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:32.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:34.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:36.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:38.018: INFO: Unexpected error while creating namespace: Post 
"https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:40.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:42.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:44.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:46.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:48.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:50.017: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:52.018: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused Jan 29 23:19:54.019: INFO: Unexpected error while creating namespace: Post "https://35.230.1.140/api/v1/namespaces": dial tcp 35.230.1.140:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 23:20:40.519 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 23:20:40.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:20:40.77 (1m16.833s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - 
test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:20:40.77 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:20:40.77 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 23:20:40.77 Jan 29 23:20:40.953: INFO: Getting bootstrap-e2e-minion-group-88l0 Jan 29 23:20:40.954: INFO: Getting bootstrap-e2e-minion-group-6721 Jan 29 23:20:40.954: INFO: Getting bootstrap-e2e-minion-group-wqbh Jan 29 23:20:41.001: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wqbh condition Ready to be true Jan 29 23:20:41.002: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-88l0 condition Ready to be true Jan 29 23:20:41.002: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6721 condition Ready to be true Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-wqbh has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-6721 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:20:41.047: INFO: Node bootstrap-e2e-minion-group-88l0 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod 
"metadata-proxy-v0.1-f9lnv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-qj6hk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6721" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-88l0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fnk2j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.047: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vqlc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:20:41.095: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.218796ms Jan 29 23:20:41.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:41.095: INFO: Pod "metadata-proxy-v0.1-qj6hk": Phase="Running", Reason="", readiness=true. Elapsed: 48.286591ms Jan 29 23:20:41.095: INFO: Pod "metadata-proxy-v0.1-qj6hk" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.098: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 50.595555ms Jan 29 23:20:41.098: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721": Phase="Running", Reason="", readiness=true. Elapsed: 51.769605ms Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.099: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:20:41.099: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721 Jan 29 23:20:41.099: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-6721(35.197.20.238:22) Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh": Phase="Running", Reason="", readiness=true. Elapsed: 52.209489ms Jan 29 23:20:41.099: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.099: INFO: Pod "metadata-proxy-v0.1-2vqlc": Phase="Running", Reason="", readiness=true. Elapsed: 52.186892ms Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-2vqlc" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-f9lnv": Phase="Running", Reason="", readiness=true. Elapsed: 52.426998ms Jan 29 23:20:41.100: INFO: Pod "metadata-proxy-v0.1-f9lnv" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.100: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:20:41.100: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh Jan 29 23:20:41.100: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-wqbh(35.185.219.215:22) Jan 29 23:20:41.100: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0": Phase="Running", Reason="", readiness=true. Elapsed: 52.582703ms Jan 29 23:20:41.100: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0" satisfied condition "running and ready, or succeeded" Jan 29 23:20:41.633: INFO: ssh prow@35.185.219.215:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 23:20:41.633: INFO: ssh prow@35.185.219.215:22: stdout: "" Jan 29 23:20:41.633: INFO: ssh prow@35.185.219.215:22: stderr: "" Jan 29 23:20:41.633: INFO: ssh prow@35.185.219.215:22: exit code: 0 Jan 29 23:20:41.633: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wqbh condition Ready to be false Jan 29 23:20:41.635: INFO: ssh prow@35.197.20.238:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && 
break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 23:20:41.635: INFO: ssh prow@35.197.20.238:22: stdout: "" Jan 29 23:20:41.635: INFO: ssh prow@35.197.20.238:22: stderr: "" Jan 29 23:20:41.635: INFO: ssh prow@35.197.20.238:22: exit code: 0 Jan 29 23:20:41.635: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6721 condition Ready to be false Jan 29 23:20:41.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:41.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:43.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.090315654s Jan 29 23:20:43.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.092927572s Jan 29 23:20:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:43.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:43.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:45.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.091035946s Jan 29 23:20:45.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:45.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092220674s Jan 29 23:20:45.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:45.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:20:45.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:47.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.094768431s Jan 29 23:20:47.142: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:47.147: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099674828s Jan 29 23:20:47.147: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:47.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:47.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:49.143: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.095777827s Jan 29 23:20:49.143: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:49.145: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098132214s Jan 29 23:20:49.145: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:49.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:49.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:51.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.091980672s Jan 29 23:20:51.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:20:51.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093873781s Jan 29 23:20:51.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:20:51.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:51.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:20:53.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.09062543s Jan 29 23:21:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:39.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 58.092037294s Jan 29 23:21:39.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:41.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:41.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:41.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.091456492s Jan 29 23:21:41.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.093107436s Jan 29 23:21:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:43.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:43.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:43.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.091026662s Jan 29 23:21:43.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.092551607s Jan 29 23:21:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:45.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:45.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:45.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.090461407s Jan 29 23:21:45.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:45.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.092075271s Jan 29 23:21:45.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:47.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.093602456s Jan 29 23:21:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:47.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.094828447s Jan 29 23:21:47.142: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:47.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:47.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:49.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.091687817s Jan 29 23:21:49.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:49.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.092537219s Jan 29 23:21:49.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:49.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:49.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:51.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.090597022s Jan 29 23:21:51.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:51.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.091896241s Jan 29 23:21:51.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:51.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:51.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:53.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.090331869s Jan 29 23:21:53.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:53.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.092710767s Jan 29 23:21:53.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:53.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:53.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:55.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.090816069s Jan 29 23:21:55.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:55.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.092279511s Jan 29 23:21:55.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:55.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:55.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:57.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.094616961s Jan 29 23:21:57.142: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m16.09449305s Jan 29 23:21:57.142: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:57.142: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:57.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:57.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:59.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.091364146s Jan 29 23:21:59.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:21:59.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.092721654s Jan 29 23:21:59.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:21:59.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:21:59.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:01.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m20.091640966s Jan 29 23:22:01.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:01.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.09260959s Jan 29 23:22:01.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:01.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:01.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:03.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.091445228s Jan 29 23:22:03.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:03.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.092819528s Jan 29 23:22:03.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:03.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:03.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:05.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.091260988s Jan 29 23:22:05.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:05.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.092616172s Jan 29 23:22:05.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:05.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:05.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:07.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.09196965s Jan 29 23:22:07.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:07.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.094025824s Jan 29 23:22:07.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:07.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:07.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:09.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m28.090387395s Jan 29 23:22:09.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:09.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.092157966s Jan 29 23:22:09.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:09.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:09.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:11.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.091432957s Jan 29 23:22:11.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.093091896s Jan 29 23:22:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:11.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:11.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:13.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.09007788s Jan 29 23:22:13.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:13.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.093040244s Jan 29 23:22:13.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:13.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:13.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:15.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m34.090393697s Jan 29 23:22:15.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:15.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.092510529s Jan 29 23:22:15.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:15.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:15.794: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:17.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m36.090897654s Jan 29 23:22:17.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:17.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.093524143s Jan 29 23:22:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:17.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:17.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:19.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m38.090860704s Jan 29 23:22:19.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:19.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.092047873s Jan 29 23:22:19.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:19.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:19.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:21.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m40.095362817s Jan 29 23:22:21.143: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:21.143: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.095841581s Jan 29 23:22:21.143: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:21.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:21.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:23.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.090284073s Jan 29 23:22:23.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:23.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.092369887s Jan 29 23:22:23.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:23.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:23.969: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:25.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m44.090117426s Jan 29 23:22:25.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:25.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.092581254s Jan 29 23:22:25.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:26.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:26.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:27.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m46.091855693s Jan 29 23:22:27.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:27.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.094174742s Jan 29 23:22:27.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:28.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:28.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:29.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.09041197s Jan 29 23:22:29.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:29.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.092421129s Jan 29 23:22:29.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:30.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:30.106: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:31.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m50.090489788s Jan 29 23:22:31.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:31.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.092797362s Jan 29 23:22:31.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:32.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:32.151: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:33.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.091628486s Jan 29 23:22:33.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:33.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.093245231s Jan 29 23:22:33.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:34.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:34.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:35.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.094017389s Jan 29 23:22:35.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:35.143: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.095633111s Jan 29 23:22:35.143: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:36.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:36.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:37.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m56.090635951s Jan 29 23:22:37.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.09330821s Jan 29 23:22:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:38.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:38.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m58.090946576s Jan 29 23:22:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:39.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.092584488s Jan 29 23:22:39.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:40.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:40.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:22:41.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m0.090673767s Jan 29 23:22:41.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.092626927s Jan 29 23:22:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:42.327: INFO: Node bootstrap-e2e-minion-group-6721 didn't reach desired Ready condition status (false) within 2m0s Jan 29 23:22:42.327: INFO: Node bootstrap-e2e-minion-group-wqbh didn't reach desired Ready condition status (false) within 2m0s Jan 29 23:22:43.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m2.091037452s Jan 29 23:22:43.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.092323691s Jan 29 23:22:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:45.144: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.096581519s Jan 29 23:22:45.144: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:45.144: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m4.096855495s Jan 29 23:22:45.144: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:47.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.093869386s Jan 29 23:22:47.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.093743055s Jan 29 23:22:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:47.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:49.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m8.090566559s Jan 29 23:22:49.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:49.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.091967075s Jan 29 23:22:49.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:51.166: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m10.118661418s Jan 29 23:22:51.166: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:51.166: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.118641233s Jan 29 23:22:51.166: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:53.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.090222649s Jan 29 23:22:53.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:53.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.091932593s Jan 29 23:22:53.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:55.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m14.091148905s Jan 29 23:22:55.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:55.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.092886614s Jan 29 23:22:55.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:57.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m16.091993283s Jan 29 23:22:57.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:57.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.093812633s Jan 29 23:22:57.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:22:59.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m18.090618459s Jan 29 23:22:59.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:22:59.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.092931641s Jan 29 23:22:59.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:01.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m20.090651135s Jan 29 23:23:01.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:01.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.092970314s Jan 29 23:23:01.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:03.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m22.090088764s Jan 29 23:23:03.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:03.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.092471459s Jan 29 23:23:03.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:05.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m24.090164541s Jan 29 23:23:05.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:05.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.091683355s Jan 29 23:23:05.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:07.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.091719453s Jan 29 23:23:07.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:07.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.092828136s Jan 29 23:23:07.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:09.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.092981924s Jan 29 23:23:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:09.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m28.093125957s Jan 29 23:23:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.092838366s Jan 29 23:23:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:11.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.09390196s Jan 29 23:23:11.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:13.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m32.091590951s Jan 29 23:23:13.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:13.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.093055803s Jan 29 23:23:13.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:15.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m34.09087927s Jan 29 23:23:15.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:15.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.092277155s Jan 29 23:23:15.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:17.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m36.090978362s Jan 29 23:23:17.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:17.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.093959416s Jan 29 23:23:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:19.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m38.090573825s Jan 29 23:23:19.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:19.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.093490909s Jan 29 23:23:19.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:21.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m40.090728334s Jan 29 23:23:21.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:21.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.093019834s Jan 29 23:23:21.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:23.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m42.090362839s Jan 29 23:23:23.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:23.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.092902057s Jan 29 23:23:23.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:25.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m44.091319166s Jan 29 23:23:25.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:25.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.092639523s Jan 29 23:23:25.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:27.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m46.091393076s Jan 29 23:23:27.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:27.142: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.095058032s Jan 29 23:23:27.142: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:29.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m48.090802202s Jan 29 23:23:29.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:29.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.092095913s Jan 29 23:23:29.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:31.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m50.091511387s Jan 29 23:23:31.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:31.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.092438805s Jan 29 23:23:31.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:33.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m52.090108788s Jan 29 23:23:33.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:33.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.093235585s Jan 29 23:23:33.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:35.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m54.090502236s Jan 29 23:23:35.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:35.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.092489155s Jan 29 23:23:35.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:37.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m56.091231018s Jan 29 23:23:37.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.09344769s Jan 29 23:23:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m58.09110638s Jan 29 23:23:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:39.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.092170245s Jan 29 23:23:39.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:41.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m0.09203382s Jan 29 23:23:41.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:41.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.093393083s Jan 29 23:23:41.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:43.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m2.09043443s Jan 29 23:23:43.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:43.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.092899388s Jan 29 23:23:43.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:45.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m4.090260906s Jan 29 23:23:45.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:45.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.093787102s Jan 29 23:23:45.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:47.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.091478789s Jan 29 23:23:47.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:47.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.092926904s Jan 29 23:23:47.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:49.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m8.090092173s Jan 29 23:23:49.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:49.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.092912458s Jan 29 23:23:49.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:51.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m10.09101644s Jan 29 23:23:51.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:51.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.092425745s Jan 29 23:23:51.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:53.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m12.09092525s Jan 29 23:23:53.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:53.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.09221553s Jan 29 23:23:53.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:55.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m14.090232968s Jan 29 23:23:55.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:55.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.091392055s Jan 29 23:23:55.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:57.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m16.092462263s Jan 29 23:23:57.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:57.146: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.098325811s Jan 29 23:23:57.146: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:23:59.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m18.090494777s Jan 29 23:23:59.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:23:59.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.092996604s Jan 29 23:23:59.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:24:01.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
[... identical polling output repeated every ~2s from 23:24:01 through 23:24:59 (Elapsed 3m20s–4m18s): pod "volume-snapshot-controller-0" on 'bootstrap-e2e-minion-group-88l0' remained Phase="Running" but not Ready (ContainersNotReady: [volume-snapshot-controller]; Ready/ContainersReady False since 2023-01-29 23:15:17; Initialized/PodScheduled True since 23:02:22); pod "kube-dns-autoscaler-5f6455f985-fnk2j" on the same node remained Phase="Pending" when 'Running' was expected ...]
Elapsed: 4m20.09021149s Jan 29 23:25:01.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:01.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.091504334s Jan 29 23:25:01.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:03.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m22.090739307s Jan 29 23:25:03.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:03.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.092390257s Jan 29 23:25:03.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:05.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m24.091074865s Jan 29 23:25:05.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:05.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.092368443s Jan 29 23:25:05.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:07.146: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m26.099055877s Jan 29 23:25:07.146: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:07.149: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.101505139s Jan 29 23:25:07.149: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:09.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m28.090844089s Jan 29 23:25:09.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:09.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.092415599s Jan 29 23:25:09.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:11.139: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m30.092008064s Jan 29 23:25:11.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:11.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.093037937s Jan 29 23:25:11.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:13.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m32.090111063s Jan 29 23:25:13.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:13.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.092717913s Jan 29 23:25:13.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:15.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m34.090363139s Jan 29 23:25:15.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:15.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.092506751s Jan 29 23:25:15.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:17.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m36.093573405s Jan 29 23:25:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:17.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.093514131s Jan 29 23:25:17.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:19.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.092712912s Jan 29 23:25:19.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:19.141: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m38.093960329s Jan 29 23:25:19.141: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:21.140: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.093108498s Jan 29 23:25:21.140: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:21.157: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m40.109233381s Jan 29 23:25:21.157: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:23.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.09039987s Jan 29 23:25:23.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:23.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.092724829s Jan 29 23:25:23.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:25.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m44.091357096s Jan 29 23:25:25.139: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:25.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.093238727s Jan 29 23:25:25.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:27.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m46.091191958s Jan 29 23:25:27.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:27.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.093472628s Jan 29 23:25:27.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:29.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m48.090456495s Jan 29 23:25:29.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:29.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.092753157s Jan 29 23:25:29.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:31.137: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m50.090215578s Jan 29 23:25:31.137: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:31.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.092516551s Jan 29 23:25:31.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:33.142: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m52.095036424s Jan 29 23:25:33.142: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:33.143: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.096046106s Jan 29 23:25:33.143: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:35.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m54.090755343s Jan 29 23:25:35.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:35.139: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.092149725s Jan 29 23:25:35.139: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:37.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m56.090928537s Jan 29 23:25:37.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:37.141: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.093556841s Jan 29 23:25:37.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:39.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4m58.090417824s Jan 29 23:25:39.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:39.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.093218749s Jan 29 23:25:39.141: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m16.834s) test/e2e/cloud/gcp/reboot.go:144 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:144 Spec Goroutine goroutine 7962 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0009e92d8?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7fa980334a20?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0xc00015c680, 0x187}, 0xc004ae05a0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7fa980334a20, 0xc0041cee40}) test/e2e/cloud/gcp/reboot.go:149 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0041cee40}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7965 [chan receive, 5 minutes] k8s.io/kubernetes/test/e2e/framework/pod.checkPodsCondition({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540, 0x4, 0x4}, 0x45d964b800, ...) test/e2e/framework/pod/resource.go:531 k8s.io/kubernetes/test/e2e/framework/pod.CheckPodsRunningReadyOrSucceeded(...) test/e2e/framework/pod/resource.go:508 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187}) test/e2e/cloud/gcp/reboot.go:284 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 23:25:41.138: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 5m0.090684153s Jan 29 23:25:41.138: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:41.140: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.092780367s Jan 29 23:25:41.140: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending' Jan 29 23:25:41.180: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.132632821s Jan 29 23:25:41.180: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:15:17 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:25:41.180: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. 
Jan 29 23:25:41.182: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.134480313s
Jan 29 23:25:41.182: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'kube-dns-autoscaler-5f6455f985-fnk2j' on 'bootstrap-e2e-minion-group-88l0' to be 'Running' but was 'Pending'
Jan 29 23:25:41.182: INFO: Pod kube-dns-autoscaler-5f6455f985-fnk2j failed to be running and ready, or succeeded.
Jan 29 23:25:41.182: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. Pods: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:25:41.182: INFO: Status for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:16:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:16:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [autoscaler]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-29 23:02:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:&ContainerStateWaiting{Reason:,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:5 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://2a572fc68ee46a9092bd3974970fa73307b65446abcf982e614a7bda96792a22 Started:0xc001257d1a}] QOSClass:Burstable EphemeralContainerStatuses:[]}

Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m36.837s)
    test/e2e/cloud/gcp/reboot.go:144
    In [It] (Node Runtime: 5m20.004s)
      test/e2e/cloud/gcp/reboot.go:144

  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0009e92d8?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fa980334a20?)
      /usr/local/go/src/sync/waitgroup.go:139
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0xc00015c680, 0x187}, 0xc004ae05a0)
      test/e2e/cloud/gcp/reboot.go:181
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7fa980334a20, 0xc0041cee40})
      test/e2e/cloud/gcp/reboot.go:149
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0041cee40})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841

  Goroutines of Interest
  goroutine 7965 [select, 2 minutes]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000aa2d80, 0xc002686400)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0013df710, 0xc002686400, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002bf0000?}, 0xc002686400?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc002bf0000, 0xc002686400)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc000671800?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0049fa3c0, 0xc002686300)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00020f9c0, 0xc002686200)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc002686200, {0x80d5d80, 0xc00020f9c0}, {0x75d65c0?, 0x2675501?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0049fa3f0, 0xc002686200, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0049fa3f0, 0xc002686200)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00531e480, {0x7fa980334a20, 0xc0041cee40}, 0x7fa9805a55a8?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00531e480, {0x7fa980334a20, 0xc0041cee40})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x7fa980334a20, 0xc0041cee40}, {0x8147108?, 0xc004b3d860?}, {0xc001257ad0, 0xb}, {0xc002f1dfb0, 0x24}, {0xc001257d20, 0xa}, ...)
      test/e2e/framework/pod/resource.go:572
    k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
      test/e2e/framework/pod/resource.go:543
  > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...})
      test/e2e/cloud/gcp/reboot.go:221
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:285
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169

Jan 29 23:26:11.226: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-fnk2j/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-fnk2j):
Jan 29 23:26:11.226: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-fnk2j/autoscaler, err: an error on the server ("unknown") has prevented the request from succeeding (get pods kube-dns-autoscaler-5f6455f985-fnk2j):
Jan 29 23:26:11.226: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:15:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:15:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 23:02:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP: PodIPs:[] StartTime:2023-01-29 23:02:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 23:13:04 +0000 UTC,FinishedAt:2023-01-29 23:13:35 +0000 UTC,ContainerID:containerd://9f1b3f62046306065becc07178c0c35df2575ff44e8f2df8723b0024b1585573,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:7 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://9f1b3f62046306065becc07178c0c35df2575ff44e8f2df8723b0024b1585573 Started:0xc004da2737}] QOSClass:BestEffort EphemeralContainerStatuses:[]}

Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 6m56.839s)
    test/e2e/cloud/gcp/reboot.go:144
    In [It] (Node Runtime: 5m40.006s)
      test/e2e/cloud/gcp/reboot.go:144

  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0009e92d8?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fa980334a20?)
      /usr/local/go/src/sync/waitgroup.go:139
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0xc00015c680, 0x187}, 0xc004ae05a0)
      test/e2e/cloud/gcp/reboot.go:181
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7fa980334a20, 0xc0041cee40})
      test/e2e/cloud/gcp/reboot.go:149
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0041cee40})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841

  Goroutines of Interest
  goroutine 7965 [select]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000aa2d80, 0xc000341400)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0013df710, 0xc000341400, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002bf0000?}, 0xc000341400?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc002bf0000, 0xc000341400)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc0006717d0?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0049fa3c0, 0xc000340400)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00020f9c0, 0xc0011a1e00)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc0011a1e00, {0x80d5d80, 0xc00020f9c0}, {0x75d65c0?, 0x2675701?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0049fa3f0, 0xc0011a1e00, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0049fa3f0, 0xc0011a1e00)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc004500000, {0x7fa980334a20, 0xc0041cee40}, 0x7fa9805a55a8?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc004500000, {0x7fa980334a20, 0xc0041cee40})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x7fa980334a20, 0xc0041cee40}, {0x8147108?, 0xc004b3d860?}, {0xc004da2590, 0xb}, {0xc000500d40, 0x1c}, {0xc000501120, 0x1a}, ...)
      test/e2e/framework/pod/resource.go:572
    k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
      test/e2e/framework/pod/resource.go:543
  > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...})
      test/e2e/cloud/gcp/reboot.go:221
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:285
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169

Automatically polling progress:
  [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards (Spec Runtime: 7m16.841s)
    test/e2e/cloud/gcp/reboot.go:144
    In [It] (Node Runtime: 6m0.008s)
      test/e2e/cloud/gcp/reboot.go:144

  Spec Goroutine
  goroutine 7962 [semacquire, 6 minutes]
    sync.runtime_Semacquire(0xc0009e92d8?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7fa980334a20?)
      /usr/local/go/src/sync/waitgroup.go:139
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7fa980334a20?, 0xc0041cee40}, {0x8147108?, 0xc004b3d860}, {0xc00015c680, 0x187}, 0xc004ae05a0)
      test/e2e/cloud/gcp/reboot.go:181
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.8({0x7fa980334a20, 0xc0041cee40})
      test/e2e/cloud/gcp/reboot.go:149
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0041cee40})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841

  Goroutines of Interest
  goroutine 7965 [select]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000aa2d80, 0xc000341400)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0013df710, 0xc000341400, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002bf0000?}, 0xc000341400?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc002bf0000, 0xc000341400)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc0006717d0?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0049fa3c0, 0xc000340400)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00020f9c0, 0xc0011a1e00)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc0011a1e00, {0x80d5d80, 0xc00020f9c0}, {0x75d65c0?, 0x2675701?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0049fa3f0, 0xc0011a1e00, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0049fa3f0, 0xc0011a1e00)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc004500000, {0x7fa980334a20, 0xc0041cee40}, 0x7fa9805a55a8?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc004500000, {0x7fa980334a20, 0xc0041cee40})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x7fa980334a20, 0xc0041cee40}, {0x8147108?, 0xc004b3d860?}, {0xc004da2590, 0xb}, {0xc000500d40, 0x1c}, {0xc000501120, 0x1a}, ...)
      test/e2e/framework/pod/resource.go:572
    k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
      test/e2e/framework/pod/resource.go:543
  > k8s.io/kubernetes/test/e2e/cloud/gcp.printStatusAndLogsForNotReadyPods({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x76d190b, 0xb}, {0xc000bb0540?, 0xc000bb0540?, 0x4?}, {0xc000bb0480, ...})
      test/e2e/cloud/gcp/reboot.go:221
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7fa980334a20, 0xc0041cee40}, {0x8147108, 0xc004b3d860}, {0x7ffed4d515ee, 0x3}, {0xc00482d5e0, 0x1f}, {0xc00015c680, 0x187})
      test/e2e/cloud/gcp/reboot.go:285
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169

Jan 29 23:26:41.272: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0):
Jan 29 23:26:41.272: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: an error on the server ("unknown") has prevented the request from succeeding (get pods volume-snapshot-controller-0):
Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-6721 failed reboot test.
Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-88l0 failed reboot test.
Jan 29 23:26:41.272: INFO: Node bootstrap-e2e-minion-group-wqbh failed reboot test.
Jan 29 23:26:41.272: INFO: Executing termination hook on nodes
Jan 29 23:26:41.272: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721
Jan 29 23:26:41.272: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-6721(35.197.20.238:22)
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 23:20:51 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: stderr: ""
Jan 29 23:26:41.791: INFO: ssh prow@35.197.20.238:22: exit code: 0
Jan 29 23:26:41.791: INFO: Getting external IP address for bootstrap-e2e-minion-group-88l0
Jan 29 23:26:41.791: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-88l0(34.127.39.177:22)
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: stdout: ""
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: stderr: "cat: /tmp/drop-outbound.log: No such file or directory\n"
Jan 29 23:26:42.315: INFO: ssh prow@34.127.39.177:22: exit code: 1
Jan 29 23:26:42.315: INFO: Error while issuing ssh command: failed running "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log": <nil> (exit code 1, stderr cat: /tmp/drop-outbound.log: No such file or directory )
Jan 29 23:26:42.315: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh
Jan 29 23:26:42.315: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-wqbh(35.185.219.215:22)
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 23:20:51 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: stderr: ""
Jan 29 23:26:42.834: INFO: ssh prow@35.185.219.215:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:26:42.834
< Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 23:26:42.834 (6m2.065s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:26:42.834
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 23:26:42.835
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
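For reference, the termination-hook stdout above is the `set -x` trace of the disruption script the test leaves on each node (`/tmp/drop-outbound.log`). The sketch below is a hedged reconstruction of that script from the trace, not the actual source: the function name `drop_outbound` is invented, `iptables` is stubbed with a shell function that only echoes (the real hook runs `sudo iptables` on the node), and the real hook sleeps 10s before starting and keeps the DROP rule in place for 120s.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the drop-outbound termination hook,
# inferred from the "+ sudo iptables ..." trace in the log above.
# Stub: echo the iptables commands instead of running them, so this
# sketch is safe to execute anywhere. On a real node you would drop
# this function and prefix the calls with sudo.
iptables() { echo "iptables $*"; }

drop_outbound() {
  # real hook: sleep 10  (let the SSH session detach first)
  # Rule 1: keep loopback traffic (e.g. kubelet talking to localhost) alive.
  iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT
  # Rule 2: drop every other outbound packet, cutting the node off.
  iptables -I OUTPUT 2 -j DROP
  date
  # real hook: sleep 120  (keep the node cut off for two minutes)
  # Remove both rules so the node can recover and rejoin the cluster.
  iptables -D OUTPUT -j DROP
  iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT
}

out=$(drop_outbound)
echo "$out"
```

This also explains the `exit code: 1` on bootstrap-e2e-minion-group-88l0: the hook either never wrote `/tmp/drop-outbound.log` there or the file was lost, so the `cat && rm` cleanup command failed.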
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-4q7fd to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.847441403s (3.847450323s including waiting)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.5:8181/ready": dial tcp 10.64.1.5:8181: connect: connection refused
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": dial tcp 10.64.1.14:8181: connect: connection refused
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-4q7fd_kube-system(4a425660-a466-48ab-85da-437da7e618a6)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.20:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gs5tb to bootstrap-e2e-minion-group-6721
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 987.339ms (987.349102ms including waiting)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-4q7fd
Jan 29 23:26:42.895: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gs5tb
Jan 29 23:26:42.895: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 23:26:42.895: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:26:42.895: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 23:26:42.895: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c62f became leader
Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2342f became leader
Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_7cda5 became leader
Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad7db became leader
Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_6de46 became leader
Jan 29 23:26:42.896: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_73bee became leader
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-29t5v to bootstrap-e2e-minion-group-6721
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.689877ms (625.698369ms including waiting)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-29t5v_kube-system(676a6fef-fae9-419c-967c-3c4cabf3b4d0) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9qvb2 to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.147175294s (3.147183862s including waiting)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-gx2gz to bootstrap-e2e-minion-group-wqbh
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 588.144499ms (588.157829ms including waiting)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9qvb2
Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-gx2gz
Jan 29 23:26:42.896: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-29t5v
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 23:26:42.896: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1f0b58b9-e3af-40cd-bbf1-df962a1a7d66 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_7e71952d-71c6-4e33-aa88-f17783879913 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0cd69aca-881b-4381-9d05-c8ecaf96e1ad became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c4f828ec-1e3d-476b-946c-47cb9fad7392 became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_73715983-9a20-46cb-94e2-8a8288f2370d became leader
Jan 29 23:26:42.896: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_de0acc38-ace8-4a25-be45-fcbfe09e87e1 became leader
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fnk2j to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.672896561s (3.672911757s including waiting)
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fnk2j_kube-system(4c651565-44f7-46bb-aab2-f09040397115)
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:26:42.896: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-88l0_kube-system(b29b9d68971e1a4886acdb5b2f3d6c29)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1)
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container kube-proxy
Jan 29 23:26:42.896: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1)
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b314dba1-2f34-450d-a940-e032ea959007 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e150da42-06fa-4222-afb3-02a801863fea became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ebce7161-2751-44a8-921a-1bb0c61c7457 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c0e4b771-3ff3-4c12-aabe-26c80f1386d0 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_253cc4ed-29f8-49a2-b15e-60323a46a8b4 became leader
Jan 29 23:26:42.896: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c46daebb-994e-4579-b065-bb062f087e10 became leader
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-6rtzm to bootstrap-e2e-minion-group-88l0 Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.563762473s (2.56377916s including waiting) Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container default-http-backend Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling 
deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-6rtzm Jan 29 23:26:42.896: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 23:26:42.896: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vqlc to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 682.317948ms (682.333508ms including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.703991017s (1.704005096s including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8rcgp to bootstrap-e2e-master
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 719.810753ms (719.817586ms including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.259551358s (2.259558434s including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f9lnv to bootstrap-e2e-minion-group-wqbh
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 798.75825ms (798.781902ms including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.750147538s (1.750158281s including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qj6hk to bootstrap-e2e-minion-group-6721
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 782.318965ms (782.337545ms including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.760006025s (1.760016285s including waiting)
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8rcgp
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vqlc
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f9lnv
Jan 29 23:26:42.896: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qj6hk
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-89n8r to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.9815654s (1.981573878s including waiting)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.814773422s (2.814784611s including waiting)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-89n8r
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-89n8r
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-krpjl to bootstrap-e2e-minion-group-wqbh
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.247580285s (1.247590338s including waiting)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 963.154622ms (963.162996ms including waiting)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server-nanny
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": read tcp 10.64.2.1:43390->10.64.2.3:10250: read: connection reset by peer
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Failed: Error: failed to get sandbox container task: no running task found: task 5506be8bc4f89096ef778ab7fca9cfaf82b1876ebd96d8c3fd4f25d1ff33f02a not found: not found
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {node-controller } NodeNotReady: Node is not ready Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-krpjl_kube-system(8d0e7263-4537-45a1-934b-f1c130ff5bbc) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet 
bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 23:26:42.896: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-88l0
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.130537452s (2.130545397s including waiting)
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb)
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb)
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:26:42.896: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:26:42.896 (62ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:26:42.896
Jan 29 23:26:42.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:26:42.942 (46ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:26:42.942
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:26:42.942 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:26:42.942
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:26:42.943
STEP: Collecting events from namespace "reboot-161". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 23:26:42.943
STEP: Found 0 events.
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 23:26:42.984
Jan 29 23:26:43.025: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 23:26:43.025: INFO:
Jan 29 23:26:43.070: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 23:26:43.113: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 756bb90a-38ca-46e9-a519-4ade71c98037 3160 0 2023-01-29 23:02:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:02:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.1.140,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c3129ce97a3f63b40e57e6cbe733c44,SystemUUID:5c3129ce-97a3-f63b-40e5-7e6cbe733c44,BootID:b96cb4ed-7649-46c7-9666-6fc4b47e90dd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:26:43.113: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 23:26:43.161: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 23:27:00.198: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 23:27:00.198: INFO: Logging node info for node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.241: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6721 7fc012f3-934c-45a9-9218-4db83f456958 3166 0 2023-01-29 23:02:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6721 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 23:22:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-6721,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:22:49 +0000 UTC,LastTransitionTime:2023-01-29 23:16:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:48 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:53 +0000 UTC,LastTransitionTime:2023-01-29 23:16:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.20.238,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8984c0af3840918eae4114a902d64191,SystemUUID:8984c0af-3840-918e-ae41-14a902d64191,BootID:453eaaa2-8c9a-45b6-91c7-cfc147f61b33,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:27:00.241: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.288: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6721
Jan 29 23:27:00.332: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6721: error trying to reach service: No agent available
Jan 29 23:27:00.332: INFO: Logging node info for node bootstrap-e2e-minion-group-88l0
Jan 29 23:27:00.374: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-88l0 fab5e132-8ec0-42cf-9ad7-40e6250ae11b 3568 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-88l0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 23:26:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 23:26:59 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-88l0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 
UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:26:48 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:26:59 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.39.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0a690fed9ac751f6671eec4a21f76bad,SystemUUID:0a690fed-9ac7-51f6-671e-ec4a21f76bad,BootID:ec68c44f-1879-4b0a-a545-6219e5196494,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 23:27:00.374: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-88l0 Jan 29 23:27:00.422: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-88l0 Jan 29 23:27:00.465: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-88l0: error trying to reach service: No agent available Jan 29 23:27:00.465: INFO: Logging node info for node bootstrap-e2e-minion-group-wqbh Jan 29 23:27:00.508: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-wqbh bb57fe13-d7c7-4a05-8930-31b3fcc9decd 3167 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-wqbh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:16:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:16:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 23:22:52 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 23:22:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-wqbh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 
UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:22:46 +0000 UTC,LastTransitionTime:2023-01-29 23:16:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:22:52 +0000 UTC,LastTransitionTime:2023-01-29 23:16:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.185.219.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:40d8e2caa5d7596dbe41807842d5c069,SystemUUID:40d8e2ca-a5d7-596d-be41-807842d5c069,BootID:6aa05efe-09f8-4f76-a55c-b374dc158bb7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 23:27:00.508: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-wqbh Jan 29 23:27:00.556: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-wqbh Jan 29 23:27:00.600: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-wqbh: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:27:00.6 (17.657s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:27:00.6 (17.658s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:27:00.6 STEP: Destroying namespace "reboot-161" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 23:27:00.6 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:27:00.644 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:27:00.645 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:27:00.645 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:15:05.998
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:12:52.199 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:12:52.199 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:12:52.199 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 23:12:52.199 Jan 29 23:12:52.199: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 23:12:52.2 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 23:12:52.344 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 23:12:52.427 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:12:52.508 (309ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:12:52.508 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:12:52.508 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 23:12:52.508 Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-88l0 Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-wqbh Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-6721 Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-88l0 condition Ready to be true Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wqbh condition Ready 
to be true Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6721 condition Ready to be true Jan 29 23:12:52.714: INFO: Node bootstrap-e2e-minion-group-6721 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.714: INFO: Node bootstrap-e2e-minion-group-88l0 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-qj6hk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fnk2j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-88l0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vqlc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6721" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.728: INFO: Node bootstrap-e2e-minion-group-wqbh has 2 assigned pods with no liveness 
probes: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f9lnv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.811: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh": Phase="Running", Reason="", readiness=true. Elapsed: 82.48791ms Jan 29 23:12:52.811: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.812: INFO: Pod "metadata-proxy-v0.1-qj6hk": Phase="Running", Reason="", readiness=true. Elapsed: 97.591373ms Jan 29 23:12:52.812: INFO: Pod "metadata-proxy-v0.1-qj6hk" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.819: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Running", Reason="", readiness=true. Elapsed: 104.453601ms Jan 29 23:12:52.819: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 104.528987ms Jan 29 23:12:52.819: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.819: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-2vqlc": Phase="Running", Reason="", readiness=true. Elapsed: 106.672861ms Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-2vqlc" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0": Phase="Running", Reason="", readiness=true. Elapsed: 106.694678ms Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-f9lnv": Phase="Running", Reason="", readiness=true. Elapsed: 93.184023ms Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-f9lnv" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721": Phase="Running", Reason="", readiness=true. Elapsed: 106.740908ms Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.821: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.821: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721 Jan 29 23:12:52.821: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh Jan 29 23:12:52.821: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6721(35.197.20.238:22) Jan 29 23:12:52.821: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wqbh(35.185.219.215:22) Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: stdout: "" Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: stderr: "" Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: exit code: 0 Jan 29 23:12:53.351: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6721 condition Ready to be false Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: stdout: "" Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: stderr: "" Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: exit code: 0 Jan 29 23:12:53.354: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wqbh condition Ready to be false Jan 29 23:12:53.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:53.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:54.863: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.148686987s
Jan 29 23:12:54.863: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:12:55.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:55.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:56.862: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.147366029s
Jan 29 23:12:56.862: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:12:57.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:57.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:58.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.146899746s
Jan 29 23:12:58.861: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:12:59.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:12:59.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:00.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.146646495s
Jan 29 23:13:00.861: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:13:01.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:01.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:02.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.162177574s
Jan 29 23:13:02.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }]
Jan 29 23:13:03.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:03.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:04.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.14686463s
Jan 29 23:13:04.861: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 23:13:04.861: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:13:04.861: INFO: Getting external IP address for bootstrap-e2e-minion-group-88l0
Jan 29 23:13:04.861: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-88l0(34.127.39.177:22)
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: stdout: ""
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: stderr: ""
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: exit code: 0
Jan 29 23:13:05.399: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-88l0 condition Ready to be false
Jan 29 23:13:05.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:07.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
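The SSH payload in the log entries above is the standard Linux magic-sysrq reboot trick: enable sysrq via /proc/sys/kernel/sysrq, wait, then write `b` to /proc/sysrq-trigger for an immediate, unclean reboot. A minimal dry-run sketch of the same sequence (the `trigger_reboot` helper and `DRY_RUN` guard are illustrative, not part of the e2e framework):

```shell
#!/bin/sh
# Illustrative re-creation of the reboot command the test sends over SSH.
# WARNING: with DRY_RUN=0 this reboots the machine immediately, without
# syncing or unmounting filesystems.
DRY_RUN=${DRY_RUN:-1}   # default to printing the command, not running it

trigger_reboot() {
  # 'echo 1 > /proc/sys/kernel/sysrq' enables all sysrq functions;
  # 'echo b > /proc/sysrq-trigger' then asks the kernel to reboot at once.
  cmd="echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $cmd"
  else
    # nohup + '&' lets the SSH session return cleanly (exit code 0, as in
    # the log) before the node goes down ten seconds later.
    nohup sh -c "$cmd" >/dev/null 2>&1 &
  fi
}

trigger_reboot
```

The `sleep 10` is what allows the log's `exit code: 0` line to appear: the SSH command detaches and returns before the reboot severs the connection.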
AppArmor enabled Jan 29 23:13:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:15.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:15.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:15.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:19.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:20.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:20.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:21.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:22.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:22.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:23.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:24.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:25.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:27.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:28.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:28.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:29.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:36.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:36.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:36.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:42.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:42.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:42.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:48.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:48.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:48.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:56.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:56.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:56.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:02.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:02.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:02.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:04.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:05.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:05.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:06.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:07.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:07.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:08.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:09.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:09.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:10.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:11.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:11.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:12.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:13.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:13.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:14.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:15.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:15.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:16.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:17.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:17.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:23.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:23.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:23.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:29.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:29.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 29 23:14:29.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:31.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:31.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:31.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:33.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:33.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:33.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:35.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:35.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:35.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:37.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:37.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.878: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.923: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.947: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:51.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:52.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:53.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:54.009: INFO: Node bootstrap-e2e-minion-group-6721 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:14:54.067: INFO: Node bootstrap-e2e-minion-group-wqbh didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:14:55.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:57.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:59.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:01.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:03.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-88l0 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-6721 failed reboot test.
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-88l0 failed reboot test.
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-wqbh failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:15:05.998
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 23:15:05.998 (2m13.49s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:15:05.998
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 23:15:05.998
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-4q7fd to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.847441403s (3.847450323s including waiting)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.5:8181/ready": dial tcp 10.64.1.5:8181: connect: connection refused
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": dial tcp 10.64.1.14:8181: connect: connection refused
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-4q7fd_kube-system(4a425660-a466-48ab-85da-437da7e618a6)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.20:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gs5tb to bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 987.339ms (987.349102ms including waiting)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gs5tb
Jan 29 23:15:06.050: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 23:15:06.050: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c62f became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2342f became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_7cda5 became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad7db became leader
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-29t5v to bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.689877ms (625.698369ms including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-29t5v_kube-system(676a6fef-fae9-419c-967c-3c4cabf3b4d0)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9qvb2 to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.147175294s (3.147183862s including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-gx2gz to bootstrap-e2e-minion-group-wqbh
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 588.144499ms (588.157829ms including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9qvb2
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-gx2gz
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-29t5v
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 23:15:06.050: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1f0b58b9-e3af-40cd-bbf1-df962a1a7d66 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_7e71952d-71c6-4e33-aa88-f17783879913 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0cd69aca-881b-4381-9d05-c8ecaf96e1ad became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c4f828ec-1e3d-476b-946c-47cb9fad7392 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_73715983-9a20-46cb-94e2-8a8288f2370d became leader
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fnk2j to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.672896561s (3.672911757s including waiting)
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fnk2j_kube-system(4c651565-44f7-46bb-aab2-f09040397115)
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721}
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5) Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5) Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping 
container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1) Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
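The recurring `DNSConfigForming` warnings above reflect the resolver cap the kubelet enforces when forming a pod's resolv.conf: only the first three nameservers are applied and the rest are dropped (the three-entry limit matches the warning text; the helper below is an illustrative sketch, not kubelet code):

```python
# Sketch of the truncation behind the DNSConfigForming warning: the applied
# resolv.conf keeps at most 3 nameservers and the remainder are omitted.
MAX_NAMESERVERS = 3  # limit inferred from the warning; Linux MAXNS is also 3

def apply_nameserver_limit(nameservers):
    """Return (applied, omitted) nameserver lists, mimicking the warning."""
    applied = nameservers[:MAX_NAMESERVERS]
    omitted = nameservers[MAX_NAMESERVERS:]
    return applied, omitted

# The nodes above applied "1.1.1.1 8.8.8.8 1.0.0.1"; the fourth configured
# resolver here is hypothetical, just to show one being dropped.
applied, omitted = apply_nameserver_limit(["1.1.1.1", "8.8.8.8", "1.0.0.1", "8.8.4.4"])
print("the applied nameserver line is:", " ".join(applied))  # → 1.1.1.1 8.8.8.8 1.0.0.1
```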
Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy
Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b314dba1-2f34-450d-a940-e032ea959007 became leader
Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e150da42-06fa-4222-afb3-02a801863fea became leader
Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ebce7161-2751-44a8-921a-1bb0c61c7457 became leader
Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c0e4b771-3ff3-4c12-aabe-26c80f1386d0 became leader
Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_253cc4ed-29f8-49a2-b15e-60323a46a8b4 became leader
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
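The `BackOff` events above are the kubelet's crash-loop restart delay: each consecutive failure doubles the wait before the next restart attempt, up to a cap. A rough sketch of that schedule follows; the 10-second base and 5-minute cap mirror commonly documented kubelet defaults, so treat the exact constants as assumptions:

```python
# Illustrative crash-loop back-off: the restart delay doubles per consecutive
# container failure and is capped. Constants are assumed kubelet defaults
# (10s initial, 300s max), not taken from this log.
INITIAL_DELAY_S = 10
MAX_DELAY_S = 300  # 5 minutes

def backoff_delays(failures):
    """Delay (seconds) before each of the first `failures` restart attempts."""
    return [min(INITIAL_DELAY_S * 2 ** i, MAX_DELAY_S) for i in range(failures)]

print(backoff_delays(7))  # → [10, 20, 40, 80, 160, 300, 300]
```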
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-6rtzm to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 2.563762473s (2.56377916s including waiting)
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container default-http-backend
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-6rtzm
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99-6rtzm: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container default-http-backend
Jan 29 23:15:06.051: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-6rtzm
Jan 29 23:15:06.051: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 23:15:06.051: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-2vqlc to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 682.317948ms (682.333508ms including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.703991017s (1.704005096s including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-2vqlc: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-8rcgp to bootstrap-e2e-master
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 719.810753ms (719.817586ms including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.259551358s (2.259558434s including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-8rcgp: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f9lnv to bootstrap-e2e-minion-group-wqbh
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 798.75825ms (798.781902ms including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.750147538s (1.750158281s including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-f9lnv: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qj6hk to bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 782.318965ms (782.337545ms including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.760006025s (1.760016285s including waiting)
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container metadata-proxy
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1-qj6hk: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container prometheus-to-sd-exporter
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-8rcgp
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-2vqlc
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f9lnv
Jan 29 23:15:06.051: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qj6hk
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-89n8r to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.9815654s (1.981573878s including waiting)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.814773422s (2.814784611s including waiting)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c-89n8r: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-89n8r
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-89n8r
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-krpjl to bootstrap-e2e-minion-group-wqbh
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.247580285s (1.247590338s including waiting)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 963.154622ms (963.162996ms including waiting)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping container metrics-server-nanny
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": read tcp 10.64.2.1:43390->10.64.2.3:10250: read: connection reset by peer
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Failed: Error: failed to get sandbox container task: no running task found: task 5506be8bc4f89096ef778ab7fca9cfaf82b1876ebd96d8c3fd4f25d1ff33f02a not found: not found
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-krpjl_kube-system(8d0e7263-4537-45a1-934b-f1c130ff5bbc) Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container metrics-server-nanny Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container metrics-server-nanny Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9-krpjl: {kubelet 
bootstrap-e2e-minion-group-wqbh} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-krpjl Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 23:15:06.051: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-88l0 Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.130537452s (2.130545397s including waiting) Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb) Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container volume-snapshot-controller Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(bd0dd270-555b-4436-b406-8a283304f5bb) Jan 29 23:15:06.051: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:15:06.051 (52ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:15:06.051 Jan 29 23:15:06.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 23:15:06.094 (43ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:15:06.094 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - 
test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 23:15:06.094 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:15:06.094
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:15:06.094
STEP: Collecting events from namespace "reboot-2480". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 23:15:06.095
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 23:15:06.137
Jan 29 23:15:06.178: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 23:15:06.178: INFO:
Jan 29 23:15:06.222: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 23:15:06.264: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 756bb90a-38ca-46e9-a519-4ade71c98037 2003 0 2023-01-29 23:02:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 23:02:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 23:12:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 
1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:40 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:40 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:40 +0000 UTC,LastTransitionTime:2023-01-29 23:02:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:12:40 +0000 UTC,LastTransitionTime:2023-01-29 23:02:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.230.1.140,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5c3129ce97a3f63b40e57e6cbe733c44,SystemUUID:5c3129ce-97a3-f63b-40e5-7e6cbe733c44,BootID:b96cb4ed-7649-46c7-9666-6fc4b47e90dd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:15:06.265: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 23:15:06.311: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 23:15:06.356: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 23:15:06.356: INFO: Logging node info for node bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.398: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6721 7fc012f3-934c-45a9-9218-4db83f456958 2227 0 2023-01-29 23:02:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6721 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:10:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:11:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 23:11:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 23:14:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-6721,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:11:12 +0000 
UTC,LastTransitionTime:2023-01-29 23:11:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:11:12 +0000 UTC,LastTransitionTime:2023-01-29 23:11:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:11:12 +0000 UTC,LastTransitionTime:2023-01-29 23:11:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:11:12 +0000 UTC,LastTransitionTime:2023-01-29 23:11:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.20.238,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6721.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8984c0af3840918eae4114a902d64191,SystemUUID:8984c0af-3840-918e-ae41-14a902d64191,BootID:8d9b7c99-ccc9-4555-8071-1dcf69b96ec3,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 23:15:06.398: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.467: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.513: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6721: error trying to reach service: No agent available
Jan 29 23:15:06.513: INFO: Logging node info for node bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.557: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-88l0 fab5e132-8ec0-42cf-9ad7-40e6250ae11b 2243 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-88l0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 23:10:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:12:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 23:12:44 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 23:14:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-88l0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:24 +0000 UTC,LastTransitionTime:2023-01-29 23:14:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 
UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.39.177,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-88l0.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0a690fed9ac751f6671eec4a21f76bad,SystemUUID:0a690fed-9ac7-51f6-671e-ec4a21f76bad,BootID:42e88585-d0e8-4c20-81c9-8d878eca4762,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 23:15:06.557: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-88l0 Jan 29 23:15:06.611: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-88l0 Jan 29 23:15:06.658: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-88l0: error trying to reach service: No agent available Jan 29 23:15:06.658: INFO: Logging node info for node bootstrap-e2e-minion-group-wqbh Jan 29 23:15:06.701: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-wqbh bb57fe13-d7c7-4a05-8930-31b3fcc9decd 2224 0 2023-01-29 23:02:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-wqbh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 23:02:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 
23:10:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 23:12:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 23:12:44 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 23:14:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-gci-slow/us-west1-b/bootstrap-e2e-minion-group-wqbh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 23:14:13 +0000 UTC,LastTransitionTime:2023-01-29 23:14:12 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 23:02:22 +0000 UTC,LastTransitionTime:2023-01-29 23:02:22 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 
UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 23:12:44 +0000 UTC,LastTransitionTime:2023-01-29 23:12:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.185.219.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-wqbh.c.k8s-jkns-e2e-gce-gci-slow.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:40d8e2caa5d7596dbe41807842d5c069,SystemUUID:40d8e2ca-a5d7-596d-be41-807842d5c069,BootID:c22c774c-a95f-4696-862b-8e2264758f6a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 23:15:06.701: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-wqbh Jan 29 23:15:06.749: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-wqbh Jan 29 23:15:06.797: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-wqbh: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 23:15:06.797 (702ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 23:15:06.797 (702ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:15:06.797 STEP: Destroying namespace "reboot-2480" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 23:15:06.797 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 23:15:06.845 (48ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:15:06.845 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 23:15:06.845 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:15:05.998
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:12:52.199 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 23:12:52.199 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:12:52.199 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 23:12:52.199 Jan 29 23:12:52.199: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 23:12:52.2 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 23:12:52.344 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 23:12:52.427 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 23:12:52.508 (309ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:12:52.508 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 23:12:52.508 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 23:12:52.508 Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-88l0 Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-wqbh Jan 29 23:12:52.604: INFO: Getting bootstrap-e2e-minion-group-6721 Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-88l0 condition Ready to be true Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wqbh condition Ready 
to be true Jan 29 23:12:52.654: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6721 condition Ready to be true Jan 29 23:12:52.714: INFO: Node bootstrap-e2e-minion-group-6721 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.714: INFO: Node bootstrap-e2e-minion-group-88l0 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0] Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-qj6hk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.714: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fnk2j" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-88l0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-2vqlc" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.715: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6721" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.728: INFO: Node bootstrap-e2e-minion-group-wqbh has 2 assigned pods with no liveness 
probes: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f9lnv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.728: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 23:12:52.811: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh": Phase="Running", Reason="", readiness=true. Elapsed: 82.48791ms Jan 29 23:12:52.811: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wqbh" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.812: INFO: Pod "metadata-proxy-v0.1-qj6hk": Phase="Running", Reason="", readiness=true. Elapsed: 97.591373ms Jan 29 23:12:52.812: INFO: Pod "metadata-proxy-v0.1-qj6hk" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.819: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j": Phase="Running", Reason="", readiness=true. Elapsed: 104.453601ms Jan 29 23:12:52.819: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 104.528987ms Jan 29 23:12:52.819: INFO: Pod "kube-dns-autoscaler-5f6455f985-fnk2j" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.819: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-2vqlc": Phase="Running", Reason="", readiness=true. Elapsed: 106.672861ms Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-2vqlc" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0": Phase="Running", Reason="", readiness=true. Elapsed: 106.694678ms Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-88l0" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-f9lnv": Phase="Running", Reason="", readiness=true. Elapsed: 93.184023ms Jan 29 23:12:52.821: INFO: Pod "metadata-proxy-v0.1-f9lnv" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721": Phase="Running", Reason="", readiness=true. Elapsed: 106.740908ms Jan 29 23:12:52.821: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6721" satisfied condition "running and ready, or succeeded" Jan 29 23:12:52.821: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-wqbh metadata-proxy-v0.1-f9lnv] Jan 29 23:12:52.821: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-6721 metadata-proxy-v0.1-qj6hk] Jan 29 23:12:52.821: INFO: Getting external IP address for bootstrap-e2e-minion-group-6721 Jan 29 23:12:52.821: INFO: Getting external IP address for bootstrap-e2e-minion-group-wqbh Jan 29 23:12:52.821: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6721(35.197.20.238:22) Jan 29 23:12:52.821: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wqbh(35.185.219.215:22) Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: stdout: "" Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: stderr: "" Jan 29 23:12:53.351: INFO: ssh prow@35.197.20.238:22: exit code: 0 Jan 29 23:12:53.351: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6721 condition Ready to be false Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: stdout: "" Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: stderr: "" Jan 29 23:12:53.354: INFO: ssh prow@35.185.219.215:22: exit code: 0 Jan 29 23:12:53.354: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wqbh condition Ready to be false Jan 29 23:12:53.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:53.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:54.863: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.148686987s Jan 29 23:12:54.863: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:12:55.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:55.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:56.862: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.147366029s Jan 29 23:12:56.862: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:12:57.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:57.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:58.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.146899746s Jan 29 23:12:58.861: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:12:59.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:12:59.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:00.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.146646495s Jan 29 23:13:00.861: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:13:01.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:01.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:02.877: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.162177574s Jan 29 23:13:02.877: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-88l0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:12:49 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 23:02:22 +0000 UTC }] Jan 29 23:13:03.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:03.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:04.861: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.14686463s Jan 29 23:13:04.861: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 23:13:04.861: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-fnk2j kube-proxy-bootstrap-e2e-minion-group-88l0 metadata-proxy-v0.1-2vqlc volume-snapshot-controller-0]
Jan 29 23:13:04.861: INFO: Getting external IP address for bootstrap-e2e-minion-group-88l0
Jan 29 23:13:04.861: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-88l0(34.127.39.177:22)
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: stdout: ""
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: stderr: ""
Jan 29 23:13:05.399: INFO: ssh prow@34.127.39.177:22: exit code: 0
Jan 29 23:13:05.399: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-88l0 condition Ready to be false
Jan 29 23:13:05.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:07.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:13:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:09.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:11.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:13.874: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:15.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:15.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:15.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.706: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:17.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:19.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:20.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:20.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:21.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:22.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:22.059: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:23.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:24.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:25.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:27.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:28.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:28.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:29.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:34.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:36.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:36.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:36.382: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:38.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:40.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:42.242: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:42.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:42.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.560: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:44.561: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:46.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:48.381: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:48.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:48.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:50.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:52.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:54.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:13:56.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:56.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:56.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:13:58.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:00.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:02.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:02.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:02.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:04.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:05.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:05.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:06.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:07.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:07.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:08.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:09.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:09.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:10.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:11.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:11.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:12.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:13.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:13.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:14.946: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:15.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:15.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:16.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:17.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:17.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:19.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:21.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:23.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:23.401: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:23.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:25.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:27.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:29.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:29.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 23:14:29.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:31.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:31.574: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:31.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:33.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:33.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:33.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:35.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:35.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 23:14:35.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 29 23:14:37.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:37.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:37.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.472: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:39.773: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:41.816: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.559: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:43.859: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.601: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.878: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:45.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.923: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:47.947: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:49.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:51.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:52.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-6721 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:52.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-wqbh is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:53.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:54.009: INFO: Node bootstrap-e2e-minion-group-6721 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:14:54.067: INFO: Node bootstrap-e2e-minion-group-wqbh didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:14:55.824: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:57.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:14:59.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:01.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:03.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-88l0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-88l0 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-6721 failed reboot test.
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-88l0 failed reboot test.
Jan 29 23:15:05.998: INFO: Node bootstrap-e2e-minion-group-wqbh failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 23:15:05.998
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 23:15:05.998 (2m13.49s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 23:15:05.998
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 23:15:05.998
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-4q7fd to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.847441403s (3.847450323s including waiting)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.5:8181/ready": dial tcp 10.64.1.5:8181: connect: connection refused
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": dial tcp 10.64.1.14:8181: connect: connection refused
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-4q7fd_kube-system(4a425660-a466-48ab-85da-437da7e618a6)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.20:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-4q7fd: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-gs5tb to bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 987.339ms (987.349102ms including waiting)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Readiness probe failed: Get "http://10.64.3.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f-gs5tb: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container coredns
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-4q7fd
Jan 29 23:15:06.050: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-gs5tb
Jan 29 23:15:06.050: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 23:15:06.050: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1c62f became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2342f became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_7cda5 became leader
Jan 29 23:15:06.050: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad7db became leader
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-29t5v to bootstrap-e2e-minion-group-6721
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.689877ms (625.698369ms including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-29t5v_kube-system(676a6fef-fae9-419c-967c-3c4cabf3b4d0)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-29t5v: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9qvb2 to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.147175294s (3.147183862s including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Unhealthy: Liveness probe failed: Get "http://10.64.1.19:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-9qvb2: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9qvb2_kube-system(94c1b8a4-d0c2-46a0-bfef-385846a587df)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-gx2gz to bootstrap-e2e-minion-group-wqbh
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 588.144499ms (588.157829ms including waiting)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent-gx2gz: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container konnectivity-agent
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9qvb2
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-gx2gz
Jan 29 23:15:06.050: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-29t5v
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 23:15:06.050: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 23:15:06.050: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_1f0b58b9-e3af-40cd-bbf1-df962a1a7d66 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_7e71952d-71c6-4e33-aa88-f17783879913 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_0cd69aca-881b-4381-9d05-c8ecaf96e1ad became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c4f828ec-1e3d-476b-946c-47cb9fad7392 became leader
Jan 29 23:15:06.050: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_73715983-9a20-46cb-94e2-8a8288f2370d became leader
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fnk2j to bootstrap-e2e-minion-group-88l0
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.672896561s (3.672911757s including waiting)
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fnk2j_kube-system(4c651565-44f7-46bb-aab2-f09040397115)
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985-fnk2j: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container autoscaler
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fnk2j
Jan 29 23:15:06.050: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {node-controller } NodeNotReady: Node is not ready
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Created: Created container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Started: Started container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} Killing: Stopping container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6721: {kubelet bootstrap-e2e-minion-group-6721} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6721_kube-system(8c98108cbd9aa73159be1e4bea9c87b5)
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Killing: Stopping container kube-proxy
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.050: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Created: Created container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-88l0: {kubelet bootstrap-e2e-minion-group-88l0} Started: Started container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Killing: Stopping 
container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wqbh_kube-system(f7bfadae6ed5c61f5cb8ce9584aa18a1) Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {node-controller } NodeNotReady: Node is not ready Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Created: Created container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wqbh: {kubelet bootstrap-e2e-minion-group-wqbh} Started: Started container kube-proxy Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 23:15:06.051: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 23:15:06.051: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_b314dba1-2f34-450d-a940-e032ea959007 bec