go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] timed out waiting for the condition
In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:01:20.94
There were additional failures detected after the initial failure. These are visible in the timeline.
(from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:00:50.819
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:00:50.819 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:00:50.819
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 18:00:50.819
Jan 28 18:00:50.819: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 18:00:50.82
Jan 28 18:00:50.860: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:00:52.899: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:00:54.901: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:00:56.899: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:00:58.901: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:00.900: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:02.900: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:04.900: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:06.900: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:08.899: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:10.899: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:12.900: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:14.899: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:16.901: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:18.901: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:20.901: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:20.940: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused
Jan 28 18:01:20.940: INFO: Unexpected error:
    <*errors.errorString | 0xc000205c80>: {
        s: "timed out waiting for the condition",
    }
[FAILED] timed out waiting for the condition
In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:01:20.94
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:01:20.94 (30.122s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:01:20.94
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:01:20.94
Jan 28 18:01:20.980: INFO: Unexpected error:
    <*url.Error | 0xc004492030>: {
        Op: "Get",
        URL: "https://35.247.33.232/api/v1/namespaces/kube-system/events",
        Err: <*net.OpError | 0xc0045ee000>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00446e420>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc004232000>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
[FAILED] Get "https://35.247.33.232/api/v1/namespaces/kube-system/events": dial tcp 35.247.33.232:443: connect: connection refused
In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 18:01:20.98
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:01:20.98 (40ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:01:20.98
Jan 28 18:01:20.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:01:21.02 (39ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:01:21.02
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:01:21.02
END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:01:21.02 (0s)
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:01:21.02 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:01:21.02
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:01:21.02 (0s)
> Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:01:21.02
< Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:01:21.02 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:09:59.915
There were additional failures detected after the initial failure. These are visible in the timeline.
(from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:51.241 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:51.241 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:01:51.241 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 18:01:51.241 Jan 28 18:01:51.241: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 18:01:51.242 Jan 28 18:01:51.282: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:53.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:55.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:57.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:59.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:01.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:03.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:05.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:07.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:09.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:11.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:13.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:15.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:17.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:19.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 18:03:03.722 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - 
test/e2e/framework/framework.go:259 @ 01/28/23 18:03:03.957 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:03:04.107 (1m12.866s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 18:03:04.107 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 18:03:04.107 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 18:03:04.107 Jan 28 18:03:04.371: INFO: Getting bootstrap-e2e-minion-group-hh49 Jan 28 18:03:04.372: INFO: Getting bootstrap-e2e-minion-group-sxb0 Jan 28 18:03:04.372: INFO: Getting bootstrap-e2e-minion-group-wdrf Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-wdrf has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-hh49 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-bk5tm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hh49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-m8bfq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-sxb0 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.495: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.546: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 51.312541ms Jan 28 18:03:04.546: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 58.735738ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm": Phase="Running", Reason="", readiness=true. Elapsed: 58.850411ms Jan 28 18:03:04.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.463948ms Jan 28 18:03:04.553: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 58.779892ms Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.553: INFO: Getting external IP address for bootstrap-e2e-minion-group-wdrf Jan 28 18:03:04.553: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wdrf(34.168.17.115:22) Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49": Phase="Running", Reason="", readiness=true. Elapsed: 59.024159ms Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-m8bfq": Phase="Running", Reason="", readiness=true. Elapsed: 58.97809ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-m8bfq" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 58.872168ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.553: INFO: Getting external IP address for bootstrap-e2e-minion-group-sxb0 Jan 28 18:03:04.553: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-sxb0(35.197.97.48:22) Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: stdout: "" Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: stderr: "" Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: exit code: 0 Jan 28 18:03:05.069: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be false Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: stdout: "" Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: stderr: "" Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: exit code: 0 Jan 28 18:03:05.074: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be false Jan 28 18:03:05.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:05.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:06.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.103695088s Jan 28 18:03:06.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:07.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:07.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:08.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.103063036s Jan 28 18:03:08.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:09.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:09.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:10.705: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.210934961s Jan 28 18:03:10.706: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:11.316: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:11.316: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:12.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.10122163s Jan 28 18:03:12.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:13.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:13.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:14.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.101546434s Jan 28 18:03:14.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:15.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:15.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:16.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.101244871s Jan 28 18:03:16.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:17.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:17.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:18.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.102046309s Jan 28 18:03:18.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:19.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:19.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:20.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.103803862s Jan 28 18:03:20.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:21.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:21.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:22.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.102699284s Jan 28 18:03:22.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:23.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:23.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:24.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.104643236s Jan 28 18:03:24.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:25.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:25.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:26.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.100953347s Jan 28 18:03:26.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:28.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.102146282s Jan 28 18:03:28.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:29.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:29.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:30.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.101288741s Jan 28 18:03:30.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:31.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:31.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:32.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.101981234s Jan 28 18:03:32.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:34.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.102319006s Jan 28 18:03:34.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:35.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:35.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:36.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.101545975s Jan 28 18:03:36.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:37.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:37.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:38.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.101596322s Jan 28 18:03:38.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:39.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:39.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:40.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.102203019s Jan 28 18:03:40.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:42.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.102151775s Jan 28 18:03:42.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:44.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:44.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:44.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.101272182s Jan 28 18:03:44.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:46.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:46.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:46.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.102025226s Jan 28 18:03:46.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:48.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:48.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:48.630: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.13522587s Jan 28 18:03:48.630: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:50.195: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 18:03:50.195: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 18:03:50.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:50.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:50.600: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.105425861s Jan 28 18:03:50.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:52.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:52.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:52.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.104498522s Jan 28 18:03:52.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:54.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:54.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:54.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.10290553s Jan 28 18:03:54.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:56.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:56.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:56.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.10087337s Jan 28 18:03:56.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:58.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 18:03:58.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:58.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.10187051s Jan 28 18:03:58.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:00.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:04:00.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:00.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.101795081s Jan 28 18:04:00.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:02.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:04:02.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:02.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.102384102s Jan 28 18:04:02.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:04.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:04.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:04.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101754703s Jan 28 18:04:04.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:06.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.104450149s Jan 28 18:04:06.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:06.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:06.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:04:08.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.101322657s Jan 28 18:04:08.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:08.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:08.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:10.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.101285377s Jan 28 18:04:10.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:10.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:10.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:12.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.101902903s Jan 28 18:04:12.596: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 18:04:12.597: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 18:04:12.597: INFO: Getting external IP address for bootstrap-e2e-minion-group-hh49 Jan 28 18:04:12.597: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hh49(34.168.65.26:22) Jan 28 18:04:12.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:12.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: stdout: "" Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: stderr: "" Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: exit code: 0 Jan 28 18:04:13.122: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be false Jan 28 18:04:13.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:16.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:16.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:16.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:18.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:18.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:18.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:20.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:04:20.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:20.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:22.585: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:22.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:22.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:24.630: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:24.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:24.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:26.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:26.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:26.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:28.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:28.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:28.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:30.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:30.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:30.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:32.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:32.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:32.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:34.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:34.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:34.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:36.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:36.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:36.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:39.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:04:39.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:39.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:41.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:43.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:43.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:43.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:45.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:45.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:45.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:47.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:47.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:04:47.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:49.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:49.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:49.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:51.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:51.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:51.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:53.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:53.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:53.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:55.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:55.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:55.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:57.447: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:57.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:57.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:59.493: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 18:04:59.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:59.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:59.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:01.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:01.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:01.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:03.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:03.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:03.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:05.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:05.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:05.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:07.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:07.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:07.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:09.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:09.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:09.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:11.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:11.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:11.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:13.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:13.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:13.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:15.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:15.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:15.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:17.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:17.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:17.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:19.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:19.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:19.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:22.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:22.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:22.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:24.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:24.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:24.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:26.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:26.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:26.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:28.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:28.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:28.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:30.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:30.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:30.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:32.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:32.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:32.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:36.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:36.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:36.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:38.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:38.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:38.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:40.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:40.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:40.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:48.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:48.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:48.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:50.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:50.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:05:50.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:52.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:52.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:52.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:54.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:54.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:54.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:56.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:56.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:56.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:58.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:59.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:05:59.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:05:59.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:05:59.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:05:59.062: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 59.466948ms Jan 28 18:05:59.062: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:05:59.063: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 60.014237ms Jan 28 18:05:59.063: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:06:01.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:06:01.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:06:01.112: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 2.109604815s Jan 28 18:06:01.112: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:05:58 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:06:01.113: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 2.109677601s Jan 28 18:06:01.113: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 18:06:03.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
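
From 18:04:12 onward the entries trace the per-node reboot procedure: send a detached `sudo reboot` over SSH, wait up to 2m0s for the node's Ready condition to become False, wait up to 5m0s for it to return to True, and then require the node's kube-system pods (here kube-proxy and metadata-proxy) to be running and ready before declaring the reboot successful. A rough, self-contained sketch of that sequence, using plain client-go and the local ssh binary rather than the framework's own helpers (node name, external IP, kubeconfig path and the field selector are assumptions taken from the log), follows.

```go
// rebootcheck.go: approximate sketch of the per-node reboot check recorded above:
// trigger a reboot over SSH, wait for the node's Ready condition to go False and
// then True again, and confirm the node's kube-system pods are Ready. Not the
// e2e framework's implementation; names, paths and timeouts come from the log.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rebootOverSSH mirrors the logged command: detach, sleep briefly, then reboot.
func rebootOverSSH(addr string) error {
	cmd := exec.Command("ssh", "prow@"+addr,
		"nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &")
	return cmd.Run()
}

// nodeReadyIs polls the node's Ready condition every 2s until it equals want.
func nodeReadyIs(cs kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{}); err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == want {
					return true
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

// nodePodsReady reports whether every kube-system pod scheduled on the node is
// Ready (or Succeeded), the final check before "Reboot successful" is logged.
func nodePodsReady(cs kubernetes.Interface, node string) bool {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node, // only pods scheduled on this node
	})
	if err != nil {
		return false
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded {
			continue
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const node, extIP = "bootstrap-e2e-minion-group-hh49", "34.168.65.26" // from the log
	if err := rebootOverSSH(extIP); err != nil {
		panic(err)
	}
	wentDown := nodeReadyIs(cs, node, corev1.ConditionFalse, 2*time.Minute) // "Ready to be false"
	cameBack := nodeReadyIs(cs, node, corev1.ConditionTrue, 5*time.Minute)  // "Ready to be true"
	fmt.Printf("reboot successful on %s: %v\n", node, wentDown && cameBack && nodePodsReady(cs, node))
}
```

As the remaining entries show, wdrf and sxb0 pass this final check ("Reboot successful on node ..."), while hh49 goes NotReady after its reboot and is still reporting NodeStatusUnknown with the node.kubernetes.io/unreachable taints when the excerpt ends.
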
Jan 28 18:06:03.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:06:03.110: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 4.107299303s Jan 28 18:06:03.110: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 18:06:03.110: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:06:03.110: INFO: Reboot successful on node bootstrap-e2e-minion-group-wdrf Jan 28 18:06:05.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:06:05.140: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:06:05.141: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:06:05.141: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:06:05.187: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 46.094518ms Jan 28 18:06:05.187: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 18:06:05.187: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 46.71371ms Jan 28 18:06:05.187: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 18:06:05.187: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:06:05.187: INFO: Reboot successful on node bootstrap-e2e-minion-group-sxb0 Jan 28 18:06:07.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:06:09.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:11.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:13.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:15.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:06:17.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:19.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:21.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:23.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:25.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:27.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:29.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:31.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:33.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:35.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:37.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:39.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:06:42.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:44.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:46.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:48.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:50.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:52.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:54.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:56.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:58.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:00.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:02.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:04.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
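The repeated "Condition Ready of node ... is false, but Node is tainted by NodeController" messages above come from re-reading the Node object on each poll: the Ready condition carries the "NodeStatusUnknown / Kubelet stopped posting node status" reason, and the node.kubernetes.io/unreachable taints (NoSchedule first, then NoExecute) are what the node controller adds while the node stays unreachable. A rough client-go sketch of the same read follows; the setup is an assumption for illustration and is not the framework code in test/e2e/framework/node.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig via the KUBECONFIG env var.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"bootstrap-e2e-minion-group-hh49", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The Ready condition holds the status/reason/message seen in the poll output.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
	// Taints added by the node controller while the kubelet is unreachable.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s added at %v\n", t.Key, t.Effect, t.TimeAdded)
	}
}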
Failure Jan 28 18:07:06.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:08.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:10.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:12.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:14.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:16.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:18.959: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:21.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:23.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:25.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:27.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:29.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:07:31.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:33.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:35.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:37.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:39.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:41.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:43.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:45.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:47.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:49.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:51.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:53.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:07:55.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:57.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:59.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:01.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:04.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m12.866s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:06.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:08.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:10.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:12.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:14.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:16.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:18.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:20.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:22.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m32.869s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m20.003s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:24.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:26.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:28.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:30.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:32.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:34.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:08:36.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:38.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:40.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:42.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m52.872s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m40.006s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:44.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
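The goroutine dumps above show the test parked in WaitConditionToBe, sleeping between polls: the time.Sleep argument 0x77359400 ns is 2s, and the 0x45d964b800 ns argument works out to 5m0s, matching the "Waiting up to 5m0s" lines and the roughly two-second spacing of the poll messages. A stripped-down sketch of that style of wait loop is shown below; the helper name and client setup are assumptions, not the framework's actual implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node's Ready condition every interval until it is
// True or the timeout expires -- roughly what the stack traces above are doing
// with a 2s interval and a 5m timeout in this run.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string,
	interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node %s didn't reach Ready=true within %v", name, timeout)
}

func main() {
	// Assumption: kubeconfig via the KUBECONFIG env var.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.TODO(), cs, "bootstrap-e2e-minion-group-hh49",
		2*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}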
Failure Jan 28 18:08:46.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:49.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:51.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:53.091: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:55.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:57.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:59.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:01.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:03.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m12.874s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m0.008s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:05.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:07.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:09.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:11.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:13.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m32.878s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m20.012s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000e58180, 0xc0013cfd00) vendor/golang.org/x/net/http2/transport.go:1273 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003b27050, 0xc0013cfd00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:565 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:517 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc001587900?}, 0xc0013cfd00?) vendor/golang.org/x/net/http2/transport.go:3099 net/http.(*Transport).roundTrip(0xc001587900, 0xc0013cfd00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x70de840?, 0xc00265e720?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0043caf60, 0xc0013cfc00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0043fffa0, 0xc0013cfb00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0013cfb00, {0x80d5bc0, 0xc0043fffa0}, {0x75d65c0?, 0x2675501?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0043caf90, 0xc0013cfb00, {0x0?, 0x8?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0043caf90, 0xc0013cfb00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc004ad77a0, {0x7f6080dbf098, 0xc003efe300}, 0x0?) vendor/k8s.io/client-go/rest/request.go:981 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc004ad77a0, {0x7f6080dbf098, 0xc003efe300}) vendor/k8s.io/client-go/rest/request.go:1022 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*nodes).Get(0xc000de3e20, {0x7f6080dbf098, 0xc003efe300}, {0xc000163520, 0x1f}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/node.go:77 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:120 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:31.315: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:33.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:35.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:37.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:39.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:41.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:43.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m52.883s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m40.017s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:45.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:47.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:49.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:51.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:53.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:55.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:09:57.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:59.915: INFO: Node bootstrap-e2e-minion-group-hh49 didn't reach desired Ready condition status (true) within 5m0s Jan 28 18:09:59.915: INFO: Node bootstrap-e2e-minion-group-hh49 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:09:59.915 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 18:09:59.915 (6m55.808s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:09:59.915 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:09:59.916 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-57g9r to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": dial tcp 10.64.0.8:8181: connect: connection refused Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-57g9r_kube-system(559db3cf-0fb3-4297-b56d-0ac966ca91f7) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-57g9r Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-khvz4 to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.902102537s (1.902110915s including waiting) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-khvz4_kube-system(27e76652-7f60-4c06-a104-08b85297ff6d) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-khvz4 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-khvz4 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-57g9r Jan 28 18:09:59.975: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 18:09:59.975: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(f4f6d281abb01fd97fbab9898b841ee8) Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_43103 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ce8c7 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_9b332 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_9a894 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_61898 became leader Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-992qv to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 621.502661ms (621.511171ms including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-992qv_kube-system(758e27db-bb32-43b6-88c4-5a90b62c4cf5) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-992qv_kube-system(758e27db-bb32-43b6-88c4-5a90b62c4cf5) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-d8pzk to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.062343715s (1.062355674s including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container konnectivity-agent Jan 28 
18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-d8pzk_kube-system(486d0863-1f90-40f0-93ed-7fe799bc262e) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.18:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.21:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Failed: Error: failed to get sandbox container task: no running task found: task 72ef94c793950bdc4e82c5796685e4973c1c8a4236b12f0c9edf1d94661de05e not found: not found Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-s8hxz to bootstrap-e2e-minion-group-sxb0 Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 621.25441ms (621.270497ms including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-s8hxz_kube-system(d786037c-6845-40a1-92ac-b2f5c98572df) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-s8hxz_kube-system(d786037c-6845-40a1-92ac-b2f5c98572df) Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-d8pzk Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-992qv Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-s8hxz Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(145c4fb803387024e2117d52f54f37b0) Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a38792f4-755d-44ff-bd20-bdecec64b9f3 became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8068c34b-35fd-4561-96a9-beff0f082a4c became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_810d8f77-741c-4245-9ac6-553fd2d92985 became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_41f35e7f-4a62-47fc-b13d-fc8899c1d95b became leader Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-bk5tm to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.947702632s (1.947710136s including waiting) Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-bk5tm_kube-system(833cbfdb-b0b3-477b-84bd-43614bc331cb) Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-bk5tm Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-bk5tm Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hh49_kube-system(f211485a3e93ec83180f6ea080c6cb6d) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-sxb0_kube-system(e7702175bb2b7fbfd431c1759e73ddbd) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-sxb0_kube-system(e7702175bb2b7fbfd431c1759e73ddbd) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wdrf_kube-system(303cb3f0a562bd634ff0aaf3397c0679) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wdrf_kube-system(303cb3f0a562bd634ff0aaf3397c0679) Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(fa8ee856119946b06c9f561d2e82b493) Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_1dea32c1-bf26-421e-abdc-77c8d15f19b1 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_07dbf869-a3aa-4569-93dc-c60b4cdbf409 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_27d586e9-4746-4406-8466-60b58ddd17fc became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_78736aa8-b0dd-4a3b-a374-dae99261b3b1 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e4deb7a9-0008-4f7e-b14a-8c4c2cfb6b33 became leader Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-655gf to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 906.647713ms (906.658821ms including waiting) Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container default-http-backend Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container default-http-backend Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-655gf Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-655gf Jan 28 18:09:59.975: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 18:09:59.975: INFO: event 
for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5zpds to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 757.400692ms (757.415662ms including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.591061119s (1.591069718s including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-77zds to bootstrap-e2e-master Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 949.395401ms (949.403046ms including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.936540468s (1.936556682s including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-br5vs to bootstrap-e2e-minion-group-sxb0 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 802.34377ms (802.403929ms including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.988933073s (1.988958398s including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-m8bfq to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 768.631082ms (768.65504ms including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.779258021s (1.779273981s including 
waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-m8bfq Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5zpds Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-br5vs Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-77zds Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-ns4h4 to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.266660795s (2.266668719s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.177425302s (1.177442205s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-6764bf875c-ns4h4_kube-system(ec05df0e-80d9-4c50-934f-51a6c70162e5) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-6764bf875c-ns4h4_kube-system(ec05df0e-80d9-4c50-934f-51a6c70162e5) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-ns4h4 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-ns4h4 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-459jr to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.32336706s (1.323386564s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 947.438414ms (947.452201ms including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet 
bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-459jr_kube-system(0b875c70-ea77-4ce6-89d4-b06d714cae18) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-459jr Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.15:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-459jr Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hh49
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.452024678s (2.452032898s including waiting)
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container volume-snapshot-controller
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container volume-snapshot-controller
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container volume-snapshot-controller
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(89171dd3-bbcb-4863-8db1-bf282b44eb66)
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:09:59.976 (61ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:09:59.976
Jan 28 18:09:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 28 18:10:00.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:02.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:04.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:06.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:08.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:10.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:12.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:14.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:16.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:18.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:20.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:22.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:24.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:26.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:28.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:30.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:32.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:34.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:36.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:38.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:40.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:42.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:44.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:46.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:48.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:50.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:52.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:54.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:56.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:10:58.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:00.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:02.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:04.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:08.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:10.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:12.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:11:14.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}].
Failure Jan 28 18:11:16.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:18.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:20.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:22.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:24.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:26.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:28.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:30.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:32.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:34.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:36.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:38.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:11:40.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:42.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:44.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:46.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:48.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:50.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:52.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:54.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:56.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:58.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:12:00.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:12:02.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. 
Failure Jan 28 18:12:04.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. Failure Jan 28 18:12:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:12:08.076 (2m8.1s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:12:08.076 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:12:08.076 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:12:08.076 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:12:08.076 STEP: Collecting events from namespace "reboot-4450". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 18:12:08.076 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 18:12:08.127 Jan 28 18:12:08.189: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 18:12:08.189: INFO: Jan 28 18:12:08.239: INFO: Logging node info for node bootstrap-e2e-master Jan 28 18:12:08.290: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09bb0353-b867-43a6-9f64-6e45f9c4aeb9 2751 0 2023-01-28 17:48:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 17:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 17:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 17:49:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 18:10:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 17:49:15 +0000 UTC,LastTransitionTime:2023-01-28 17:49:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:49:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.33.232,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ceaf667f6b5e1324cd116eb2db802512,SystemUUID:ceaf667f-6b5e-1324-cd11-6eb2db802512,BootID:79f7efc7-8b19-44a9-8ebd-59b6af441d89,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3,KubeletVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,KubeProxyVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:125274937,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:57551160,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:08.291: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 18:12:08.375: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 18:12:08.530: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-apiserver ready: true, restart count 4 Jan 28 18:12:08.530: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-scheduler ready: true, restart count 5 Jan 28 18:12:08.530: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container konnectivity-server-container ready: true, restart count 4 Jan 28 18:12:08.530: INFO: 
kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 28 18:12:08.530: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container etcd-container ready: true, restart count 3 Jan 28 18:12:08.530: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container etcd-container ready: true, restart count 2 Jan 28 18:12:08.530: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 17:48:31 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 28 18:12:08.530: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 17:48:31 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 28 18:12:08.530: INFO: metadata-proxy-v0.1-77zds started at 2023-01-28 17:49:13 +0000 UTC (0+2 container statuses recorded) Jan 28 18:12:08.530: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 18:12:08.530: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 18:12:08.778: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 18:12:08.778: INFO: Logging node info for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:08.822: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hh49 b58c3df1-874b-46d3-a80a-d5aa409735f5 2935 0 2023-01-28 17:48:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hh49 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 17:48:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 18:04:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 18:12:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 18:12:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 18:12:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-minion-group-hh49,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 
18:07:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 17:49:04 +0000 UTC,LastTransitionTime:2023-01-28 17:49:04 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.65.26,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hh49.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hh49.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4f73fe47b8cfe6109fc20d7e94c98130,SystemUUID:4f73fe47-b8cf-e610-9fc2-0d7e94c98130,BootID:ff674672-75de-440b-9579-1045113effda,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3,KubeletVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,KubeProxyVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:08.823: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:08.886: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:09.388: INFO: l7-default-backend-8549d69d99-655gf started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container default-http-backend ready: false, restart count 1 Jan 
28 18:12:09.388: INFO: metadata-proxy-v0.1-m8bfq started at 2023-01-28 17:48:56 +0000 UTC (0+2 container statuses recorded) Jan 28 18:12:09.388: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 18:12:09.388: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 18:12:09.388: INFO: volume-snapshot-controller-0 started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container volume-snapshot-controller ready: true, restart count 8 Jan 28 18:12:09.388: INFO: coredns-6846b5b5f-khvz4 started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container coredns ready: false, restart count 4 Jan 28 18:12:09.388: INFO: kube-dns-autoscaler-5f6455f985-bk5tm started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container autoscaler ready: false, restart count 4 Jan 28 18:12:09.388: INFO: konnectivity-agent-d8pzk started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 18:12:09.388: INFO: coredns-6846b5b5f-57g9r started at 2023-01-28 17:49:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container coredns ready: false, restart count 6 Jan 28 18:12:09.388: INFO: kube-proxy-bootstrap-e2e-minion-group-hh49 started at 2023-01-28 17:48:55 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 18:12:45.801: INFO: Latency metrics for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:45.801: INFO: Logging node info for node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.840: INFO: Error getting node info Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-sxb0": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.840: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:45.840: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.880: INFO: Unexpected error retrieving node events Get "https://35.247.33.232/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-sxb0": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.880: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.919: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sxb0: Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-sxb0:10250/proxy/pods": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.919: INFO: Logging node info for node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:45.959: INFO: Error getting node 
info Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-wdrf": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.959: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:45.959: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:45.998: INFO: Unexpected error retrieving node events Get "https://35.247.33.232/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-wdrf": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.998: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:46.038: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-wdrf: Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-wdrf:10250/proxy/pods": dial tcp 35.247.33.232:443: connect: connection refused END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:12:46.038 (37.962s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:12:46.038 (37.962s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:12:46.038 STEP: Destroying namespace "reboot-4450" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 18:12:46.038 [FAILED] Couldn't delete ns: "reboot-4450": Delete "https://35.247.33.232/api/v1/namespaces/reboot-4450": dial tcp 35.247.33.232:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.247.33.232/api/v1/namespaces/reboot-4450", Err:(*net.OpError)(0xc00342c370)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 18:12:46.078 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:12:46.078 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:12:46.078 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:12:46.078 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:09:59.915 There were additional failures detected after the initial failure. These are visible in the timeline (from junit_01.xml).
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:51.241 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:51.241 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:01:51.241 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 18:01:51.241 Jan 28 18:01:51.241: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 18:01:51.242 Jan 28 18:01:51.282: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:53.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:55.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:57.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:59.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:01.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:03.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:05.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:07.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:09.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:11.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:13.322: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:15.321: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:17.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:02:19.323: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 18:03:03.722 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - 
test/e2e/framework/framework.go:259 @ 01/28/23 18:03:03.957 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:03:04.107 (1m12.866s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 18:03:04.107 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 18:03:04.107 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 18:03:04.107 Jan 28 18:03:04.371: INFO: Getting bootstrap-e2e-minion-group-hh49 Jan 28 18:03:04.372: INFO: Getting bootstrap-e2e-minion-group-sxb0 Jan 28 18:03:04.372: INFO: Getting bootstrap-e2e-minion-group-wdrf Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 18:03:04.434: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-wdrf has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-hh49 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-bk5tm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hh49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-m8bfq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Node bootstrap-e2e-minion-group-sxb0 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.494: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.495: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:03:04.546: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 51.312541ms Jan 28 18:03:04.546: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 58.735738ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm": Phase="Running", Reason="", readiness=true. Elapsed: 58.850411ms Jan 28 18:03:04.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.463948ms Jan 28 18:03:04.553: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 58.779892ms Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:03:04.553: INFO: Getting external IP address for bootstrap-e2e-minion-group-wdrf Jan 28 18:03:04.553: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wdrf(34.168.17.115:22) Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49": Phase="Running", Reason="", readiness=true. Elapsed: 59.024159ms Jan 28 18:03:04.553: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-m8bfq": Phase="Running", Reason="", readiness=true. Elapsed: 58.97809ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-m8bfq" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 58.872168ms Jan 28 18:03:04.553: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 18:03:04.553: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 18:03:04.553: INFO: Getting external IP address for bootstrap-e2e-minion-group-sxb0 Jan 28 18:03:04.553: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-sxb0(35.197.97.48:22) Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: stdout: "" Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: stderr: "" Jan 28 18:03:05.069: INFO: ssh prow@34.168.17.115:22: exit code: 0 Jan 28 18:03:05.069: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be false Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: stdout: "" Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: stderr: "" Jan 28 18:03:05.074: INFO: ssh prow@35.197.97.48:22: exit code: 0 Jan 28 18:03:05.074: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be false Jan 28 18:03:05.113: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:05.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:06.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.103695088s Jan 28 18:03:06.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:07.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:07.161: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:08.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.103063036s Jan 28 18:03:08.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:09.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:09.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:10.705: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.210934961s Jan 28 18:03:10.706: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:11.316: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:11.316: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:12.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.10122163s Jan 28 18:03:12.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:13.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:13.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:14.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.101546434s Jan 28 18:03:14.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:15.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:15.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:16.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.101244871s Jan 28 18:03:16.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:17.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:17.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:18.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.102046309s Jan 28 18:03:18.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:19.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:19.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:20.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.103803862s Jan 28 18:03:20.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:21.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:21.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:22.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.102699284s Jan 28 18:03:22.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:23.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:23.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:24.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.104643236s Jan 28 18:03:24.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:25.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:25.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:26.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.100953347s Jan 28 18:03:26.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:28.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.102146282s Jan 28 18:03:28.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:29.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:29.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:30.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.101288741s Jan 28 18:03:30.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:31.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:31.776: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:32.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.101981234s Jan 28 18:03:32.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:33.821: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:34.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.102319006s Jan 28 18:03:34.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:35.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:35.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:36.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.101545975s Jan 28 18:03:36.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:37.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:37.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:38.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.101596322s Jan 28 18:03:38.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:39.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:39.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:40.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.102203019s Jan 28 18:03:40.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:03:42.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.102151775s Jan 28 18:03:42.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:44.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:44.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:44.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.101272182s Jan 28 18:03:44.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:46.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:46.102: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:46.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.102025226s Jan 28 18:03:46.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:48.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:48.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:03:48.630: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.13522587s Jan 28 18:03:48.630: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:50.195: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 18:03:50.195: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 18:03:50.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:50.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:50.600: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.105425861s Jan 28 18:03:50.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:52.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:52.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:52.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.104498522s Jan 28 18:03:52.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:54.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:54.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:54.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.10290553s Jan 28 18:03:54.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:56.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:03:56.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:56.595: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.10087337s Jan 28 18:03:56.595: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:03:58.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
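The repeating messages above come from two condition polls the test runs in parallel: every couple of seconds it re-reads each rebooted Node and checks whether its Ready condition has reached the wanted value (first false, then true again within the 5m0s budget), and it re-reads the volume-snapshot-controller-0 Pod to see whether its Ready condition is True. The sketch below shows that pattern with plain client-go; it is only an illustration of the logged behaviour, not the helper code in test/e2e/cloud/gcp/reboot.go, and the names waitForNodeReady and podReady, the 2s interval, and the hard-coded kubeconfig path are assumptions made for the example.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls a node every 2s until its Ready condition matches
// `want` or the timeout expires. Illustrative sketch only; not the e2e
// framework's own helper.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// The API server can be briefly unreachable during disruptive tests; keep polling.
			fmt.Printf("transient error getting node %s: %v\n", name, err)
			return false, nil
		}
		// Report any taints the NodeController added while the node was unreachable.
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/unreachable" {
				fmt.Printf("node %s tainted: %v\n", name, t)
			}
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				fmt.Printf("node %s Ready=%s (%s: %s)\n", name, c.Status, c.Reason, c.Message)
				return c.Status == want, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
}

// podReady mirrors the "didn't have condition {Ready True}" check applied to
// volume-snapshot-controller-0 in the entries above.
func podReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the run's ">>> kubeConfig:" line; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Same sequence as the test: Ready must first go false (the reboot took
	// effect), then return to true within 5 minutes.
	for _, n := range []string{"bootstrap-e2e-minion-group-wdrf", "bootstrap-e2e-minion-group-sxb0"} {
		if err := waitForNodeReady(ctx, cs, n, v1.ConditionFalse, 2*time.Minute); err != nil {
			fmt.Printf("node %s never went NotReady: %v\n", n, err)
		}
		if err := waitForNodeReady(ctx, cs, n, v1.ConditionTrue, 5*time.Minute); err != nil {
			fmt.Printf("node %s did not become Ready again: %v\n", n, err)
		}
	}
}

Once a kubelet stops posting status, a poll of this kind is what reports Ready=false with reason NodeStatusUnknown and, shortly after, the node.kubernetes.io/unreachable NoSchedule/NoExecute taints added by the NodeController, as seen in the entries that follow.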
Jan 28 18:03:58.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:03:58.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.10187051s Jan 28 18:03:58.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:00.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:04:00.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:00.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.101795081s Jan 28 18:04:00.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:02.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:04:02.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:02.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.102384102s Jan 28 18:04:02.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:04.562: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:04.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:04.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.101754703s Jan 28 18:04:04.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:06.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.104450149s Jan 28 18:04:06.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:06.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:06.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:04:08.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.101322657s Jan 28 18:04:08.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:08.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:08.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:10.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.101285377s Jan 28 18:04:10.596: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hh49' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:09 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:04 +0000 UTC }] Jan 28 18:04:10.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:10.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:12.596: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.101902903s Jan 28 18:04:12.596: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 18:04:12.597: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0]
Jan 28 18:04:12.597: INFO: Getting external IP address for bootstrap-e2e-minion-group-hh49
Jan 28 18:04:12.597: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hh49(34.168.65.26:22)
Jan 28 18:04:12.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure
Jan 28 18:04:12.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure
Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: stdout: ""
Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: stderr: ""
Jan 28 18:04:13.122: INFO: ssh prow@34.168.65.26:22: exit code: 0
Jan 28 18:04:13.122: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be false
Jan 28 18:04:13.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 18:04:16.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure
Jan 28 18:04:16.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure
Jan 28 18:04:16.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 18:04:18.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure
Jan 28 18:04:18.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure
Jan 28 18:04:18.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 28 18:04:20.538: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}].
Failure Jan 28 18:04:20.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:20.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:22.585: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:22.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:22.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:24.630: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:24.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:24.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:26.723: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:26.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:26.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:28.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:28.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:28.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:30.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:30.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:30.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:32.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:32.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:32.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:34.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:34.930: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:34.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:36.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:36.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:36.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:39.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:04:39.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:39.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:41.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:43.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:43.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:43.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:45.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:45.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:45.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:47.206: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:47.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 18:04:47.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:49.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:49.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:49.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:51.295: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:51.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:51.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:53.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:53.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:53.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:55.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:55.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:55.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:57.447: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 18:04:57.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:57.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:59.493: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 18:04:59.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:04:59.495: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:04:59.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:01.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:01.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:01.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:03.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:03.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:03.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:05.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:05.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:05.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:07.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:07.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:07.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:09.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:09.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:09.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:11.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:11.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:11.812: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:13.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:13.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:13.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:15.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:15.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:15.900: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:17.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:17.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:17.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:19.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:19.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:19.990: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:22.026: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:22.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:22.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:24.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:24.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:24.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:26.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:26.127: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:26.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:28.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:28.176: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:28.186: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:30.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:30.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:30.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:32.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:32.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:32.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:34.333: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:36.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:36.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:36.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:38.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:38.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:38.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:40.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:40.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:40.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. 
Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:42.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:44.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:46.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:48.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:48.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:48.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:50.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:50.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:05:50.734: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:52.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:52.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:52.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:54.830: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:54.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:54.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:56.881: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:56.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:05:56.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:04:03 +0000 UTC}]. Failure Jan 28 18:05:58.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:05:59.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:03:48 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. 
Failure Jan 28 18:05:59.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 18:05:59.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:05:59.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 18:05:59.062: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 59.466948ms Jan 28 18:05:59.062: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:59:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:05:59.063: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 60.014237ms Jan 28 18:05:59.063: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:06:01.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 18:06:01.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure Jan 28 18:06:01.112: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 2.109604815s Jan 28 18:06:01.112: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:03:48 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 18:05:58 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 18:06:01.113: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 2.109677601s Jan 28 18:06:01.113: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 18:06:03.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
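The stretch above is the per-node recovery check: after each reboot the test waits up to 5m0s for the node's critical pods (here kube-proxy-bootstrap-e2e-minion-group-wdrf and metadata-proxy-v0.1-5zpds) to be "running and ready, or succeeded", and the same check passes for the sxb0 node's pods just below. A minimal client-go sketch of that pod condition follows; it assumes an already-constructed kubernetes.Interface client, and the package and helper names (rebootsketch, podRunningReadyOrSucceeded) are illustrative rather than the framework's actual helpers.

package rebootsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podRunningReadyOrSucceeded approximates the "running and ready, or succeeded"
// condition the log above polls for: the pod either completed successfully, or
// it is Running with its Ready condition set to True.
func podRunningReadyOrSucceeded(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if pod.Status.Phase == corev1.PodSucceeded {
		return true, nil
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false, nil
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

Called in a loop every couple of seconds, this reproduces the "didn't have condition {Ready True}" messages logged above until the kubelet on the rebooted node comes back and the pod's Ready condition flips to True.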
Jan 28 18:06:03.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:03:53 +0000 UTC}]. Failure
Jan 28 18:06:03.110: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 4.107299303s
Jan 28 18:06:03.110: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded"
Jan 28 18:06:03.110: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds]
Jan 28 18:06:03.110: INFO: Reboot successful on node bootstrap-e2e-minion-group-wdrf
Jan 28 18:06:05.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 18:06:05.140: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs]
Jan 28 18:06:05.141: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 18:06:05.141: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 28 18:06:05.187: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 46.094518ms
Jan 28 18:06:05.187: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded"
Jan 28 18:06:05.187: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 46.71371ms
Jan 28 18:06:05.187: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded"
Jan 28 18:06:05.187: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs]
Jan 28 18:06:05.187: INFO: Reboot successful on node bootstrap-e2e-minion-group-sxb0
Jan 28 18:06:07.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 28 18:06:09.234: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:06:11.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:06:13.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:06:15.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}].
Failure Jan 28 18:06:17.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:19.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:21.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:23.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:25.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:27.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:29.716: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:31.778: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:33.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:35.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:37.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:39.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:06:42.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:44.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:46.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:48.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:50.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:52.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:54.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:56.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:06:58.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:00.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:02.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:04.549: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:07:06.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:08.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:10.694: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:12.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:14.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:16.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:18.959: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:21.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:23.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:25.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:27.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:29.214: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:07:31.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:33.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:35.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:37.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:39.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:41.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:43.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:45.595: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:47.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:49.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:51.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:53.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
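Every entry in this stretch reports the same state: the kubelet on bootstrap-e2e-minion-group-hh49 has stopped posting status, so the node lifecycle controller has marked it unreachable and applied the node.kubernetes.io/unreachable NoSchedule (18:04:58) and NoExecute (18:06:08) taints that the wait loop keeps printing. A small, illustrative client-go snippet for listing those taints directly is shown below, under the same assumptions as the earlier sketch (existing kubernetes.Interface client; names are hypothetical).

package rebootsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeTaints lists the taints currently set on a node, for example the
// node.kubernetes.io/unreachable:NoSchedule and :NoExecute taints added by the
// node lifecycle controller while a rebooted node is unreachable.
func printNodeTaints(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, t := range node.Spec.Taints {
		added := "unknown"
		if t.TimeAdded != nil {
			added = t.TimeAdded.String()
		}
		fmt.Printf("%s=%s:%s (added %s)\n", t.Key, t.Value, t.Effect, added)
	}
	return nil
}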
Failure Jan 28 18:07:55.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:57.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:07:59.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:01.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:04.010: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m12.866s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m0s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) 
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:06.055: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:08.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:10.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:12.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:14.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:16.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:18.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:20.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:22.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m32.869s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m20.003s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:24.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:26.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:28.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:30.594: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:32.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:34.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
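The recurring goroutine dump shows where the spec is stuck: testReboot blocks on a sync.WaitGroup while the per-node goroutine sits in node.WaitConditionToBe, called from WaitForNodeToBeReady at reboot.go:301. The time.Sleep argument 0x77359400 is 2,000,000,000 ns, a 2 s poll interval, and the trailing 0x45d964b800 argument works out to 300,000,000,000 ns, i.e. 5m0s, which lines up with the "Waiting up to 5m0s" messages and the roughly 2 s spacing of the entries above. A rough, illustrative equivalent of that wait loop (not the framework's actual implementation; names are hypothetical) is sketched below.

package rebootsketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReadyState polls the Node object until its Ready condition matches
// wantReady or the timeout expires, mirroring the 2s poll / 5m timeout visible
// in the goroutine dump. It returns true if the desired state was observed.
func waitForNodeReadyState(ctx context.Context, c kubernetes.Interface, nodeName string, wantReady bool, poll, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ctx.Err() != nil {
			return false
		}
		node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					if (cond.Status == corev1.ConditionTrue) == wantReady {
						return true
					}
				}
			}
		}
		time.Sleep(poll) // the dump above shows this sleep with 0x77359400 ns (2s)
	}
	return false
}

When the node reports Ready before the window closes, the reboot is counted as successful, which is what the "Reboot successful on node ..." lines above record for the wdrf and sxb0 nodes; here the loop is still waiting on hh49.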
Failure Jan 28 18:08:36.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:38.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:40.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:42.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 6m52.872s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 5m40.006s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:08:44.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:08:46.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:49.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:51.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:53.091: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:55.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:57.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:08:59.224: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:01.342: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:03.386: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m12.874s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m0.008s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 6 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:05.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:07.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:09.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:11.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:13.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m32.878s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m20.012s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
      /usr/local/go/src/sync/waitgroup.go:139
    > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
    > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?})
      test/e2e/cloud/gcp/reboot.go:100
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 7381 [select]
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000e58180, 0xc0013cfd00)
      vendor/golang.org/x/net/http2/transport.go:1273
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003b27050, 0xc0013cfd00, {0xe0?})
      vendor/golang.org/x/net/http2/transport.go:565
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
      vendor/golang.org/x/net/http2/transport.go:517
    k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc001587900?}, 0xc0013cfd00?)
      vendor/golang.org/x/net/http2/transport.go:3099
    net/http.(*Transport).roundTrip(0xc001587900, 0xc0013cfd00)
      /usr/local/go/src/net/http/transport.go:540
    net/http.(*Transport).RoundTrip(0x70de840?, 0xc00265e720?)
      /usr/local/go/src/net/http/roundtrip.go:17
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0043caf60, 0xc0013cfc00)
      vendor/k8s.io/client-go/transport/round_trippers.go:317
    k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0043fffa0, 0xc0013cfb00)
      vendor/k8s.io/client-go/transport/round_trippers.go:168
    net/http.send(0xc0013cfb00, {0x80d5bc0, 0xc0043fffa0}, {0x75d65c0?, 0x2675501?, 0x0?})
      /usr/local/go/src/net/http/client.go:251
    net/http.(*Client).send(0xc0043caf90, 0xc0013cfb00, {0x0?, 0x8?, 0x0?})
      /usr/local/go/src/net/http/client.go:175
    net/http.(*Client).do(0xc0043caf90, 0xc0013cfb00)
      /usr/local/go/src/net/http/client.go:715
    net/http.(*Client).Do(...)
      /usr/local/go/src/net/http/client.go:581
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc004ad77a0, {0x7f6080dbf098, 0xc003efe300}, 0x0?)
      vendor/k8s.io/client-go/rest/request.go:981
    k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc004ad77a0, {0x7f6080dbf098, 0xc003efe300})
      vendor/k8s.io/client-go/rest/request.go:1022
    k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*nodes).Get(0xc000de3e20, {0x7f6080dbf098, 0xc003efe300}, {0xc000163520, 0x1f}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
      vendor/k8s.io/client-go/kubernetes/typed/core/v1/node.go:77
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:120
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:31.315: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:33.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:35.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:37.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:39.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:41.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:43.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart (Spec Runtime: 7m52.883s) test/e2e/cloud/gcp/reboot.go:97 In [It] (Node Runtime: 6m40.017s) test/e2e/cloud/gcp/reboot.go:97 Spec Goroutine goroutine 7379 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc0044d8720?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc003efe300}, {0x8146f48?, 0xc0042f5380}, {0x78135a0, 0x37}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.3({0x7f6080dbf098?, 0xc003efe300?}) test/e2e/cloud/gcp/reboot.go:100 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc003efe300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 7381 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0xc000163520, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc003efe300}, {0x8146f48, 0xc0042f5380}, {0x7ffeb48ed5ea, 0x3}, {0xc000163520, 0x1f}, {0x78135a0, 0x37}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:09:45.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:47.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:49.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:51.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:53.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:09:55.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure
Jan 28 18:09:57.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure
Jan 28 18:09:59.915: INFO: Node bootstrap-e2e-minion-group-hh49 didn't reach desired Ready condition status (true) within 5m0s
Jan 28 18:09:59.915: INFO: Node bootstrap-e2e-minion-group-hh49 failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:09:59.915
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/28/23 18:09:59.915 (6m55.808s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:09:59.915
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:09:59.916
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-57g9r to bootstrap-e2e-minion-group-hh49
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container coredns
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container coredns
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container coredns
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": dial tcp 10.64.0.8:8181: connect: connection refused
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-57g9r_kube-system(559db3cf-0fb3-4297-b56d-0ac966ca91f7) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-57g9r: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-57g9r Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-khvz4 to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.902102537s (1.902110915s including waiting) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container coredns Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-khvz4_kube-system(27e76652-7f60-4c06-a104-08b85297ff6d) Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f-khvz4: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-khvz4 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-khvz4 Jan 28 18:09:59.975: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-57g9r Jan 28 18:09:59.975: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 28 18:09:59.975: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 18:09:59.975: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(29ec3e483e58679ee5f59a6031c5e501) Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Jan 28 18:09:59.975: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(f4f6d281abb01fd97fbab9898b841ee8) Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_43103 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ce8c7 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_9b332 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_9a894 became leader Jan 28 18:09:59.975: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_61898 became leader Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-992qv to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 621.502661ms (621.511171ms including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "http://10.64.1.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-992qv_kube-system(758e27db-bb32-43b6-88c4-5a90b62c4cf5) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-992qv: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-992qv_kube-system(758e27db-bb32-43b6-88c4-5a90b62c4cf5) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-d8pzk to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.062343715s (1.062355674s including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container konnectivity-agent Jan 28 
18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-d8pzk_kube-system(486d0863-1f90-40f0-93ed-7fe799bc262e) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.18:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.21:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {kubelet bootstrap-e2e-minion-group-hh49} Failed: Error: failed to get sandbox container task: no running task found: task 72ef94c793950bdc4e82c5796685e4973c1c8a4236b12f0c9edf1d94661de05e not found: not found Jan 28 18:09:59.975: INFO: event for konnectivity-agent-d8pzk: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-s8hxz to bootstrap-e2e-minion-group-sxb0 Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 621.25441ms (621.270497ms including waiting) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-s8hxz_kube-system(d786037c-6845-40a1-92ac-b2f5c98572df) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container konnectivity-agent Jan 28 18:09:59.975: INFO: event for konnectivity-agent-s8hxz: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-s8hxz_kube-system(d786037c-6845-40a1-92ac-b2f5c98572df) Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-d8pzk Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-992qv Jan 28 18:09:59.975: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-s8hxz Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 28 18:09:59.975: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 28 18:09:59.975: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(145c4fb803387024e2117d52f54f37b0) Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a38792f4-755d-44ff-bd20-bdecec64b9f3 became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8068c34b-35fd-4561-96a9-beff0f082a4c became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_810d8f77-741c-4245-9ac6-553fd2d92985 became leader Jan 28 18:09:59.975: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_41f35e7f-4a62-47fc-b13d-fc8899c1d95b became leader Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-bk5tm to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.947702632s (1.947710136s including waiting) Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container autoscaler Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-bk5tm_kube-system(833cbfdb-b0b3-477b-84bd-43614bc331cb) Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-bk5tm Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985-bk5tm: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-bk5tm Jan 28 18:09:59.975: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hh49_kube-system(f211485a3e93ec83180f6ea080c6cb6d) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hh49: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-sxb0_kube-system(e7702175bb2b7fbfd431c1759e73ddbd) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-sxb0: {kubelet bootstrap-e2e-minion-group-sxb0} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-sxb0_kube-system(e7702175bb2b7fbfd431c1759e73ddbd) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wdrf_kube-system(303cb3f0a562bd634ff0aaf3397c0679) Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container kube-proxy Jan 28 18:09:59.975: INFO: event for kube-proxy-bootstrap-e2e-minion-group-wdrf: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-wdrf_kube-system(303cb3f0a562bd634ff0aaf3397c0679) Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9" already present on machine Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(fa8ee856119946b06c9f561d2e82b493) Jan 28 18:09:59.975: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_1dea32c1-bf26-421e-abdc-77c8d15f19b1 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_07dbf869-a3aa-4569-93dc-c60b4cdbf409 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_27d586e9-4746-4406-8466-60b58ddd17fc became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_78736aa8-b0dd-4a3b-a374-dae99261b3b1 became leader Jan 28 18:09:59.975: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e4deb7a9-0008-4f7e-b14a-8c4c2cfb6b33 became leader Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-655gf to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 906.647713ms (906.658821ms including waiting) Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container default-http-backend Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container default-http-backend Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-655gf Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99-655gf: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-655gf Jan 28 18:09:59.975: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 28 18:09:59.975: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 28 18:09:59.975: INFO: event 
for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5zpds to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 757.400692ms (757.415662ms including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.591061119s (1.591069718s including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-5zpds: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-77zds to bootstrap-e2e-master Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 949.395401ms (949.403046ms including waiting) Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.975: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.936540468s (1.936556682s including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-77zds: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-br5vs to bootstrap-e2e-minion-group-sxb0 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 802.34377ms (802.403929ms including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulling: Pulling image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.988933073s (1.988958398s including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-br5vs: {kubelet bootstrap-e2e-minion-group-sxb0} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-m8bfq to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 768.631082ms (768.65504ms including waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metadata-proxy Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.779258021s (1.779273981s including 
waiting) Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container prometheus-to-sd-exporter Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {kubelet bootstrap-e2e-minion-group-hh49} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1-m8bfq: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-m8bfq Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5zpds Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-br5vs Jan 28 18:09:59.976: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-77zds Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
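[Editor's note] The FailedScheduling messages above record the scheduler rejecting the only registered node because the pending pod does not tolerate the node.kubernetes.io/not-ready taint. As a hedged illustration only (not part of the test output), the minimal Go sketch below shows how that taint/toleration match works using the k8s.io/api types; the toleration shown is an assumed example of the kind DaemonSet-style system pods typically carry, not something read from this cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The taint named in the FailedScheduling events above.
	notReady := corev1.Taint{
		Key:    "node.kubernetes.io/not-ready",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Assumed example toleration; a pod without one like it stays Pending
	// until the node becomes Ready and the taint is removed.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	// Toleration.ToleratesTaint applies the same key/operator/effect matching
	// rules the scheduler uses when it reports "untolerated taint".
	fmt.Println("pod tolerates not-ready taint:", tol.ToleratesTaint(&notReady))
}
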
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-ns4h4 to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.266660795s (2.266668719s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.177425302s (1.177442205s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-6764bf875c-ns4h4_kube-system(ec05df0e-80d9-4c50-934f-51a6c70162e5) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c-ns4h4: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-6764bf875c-ns4h4_kube-system(ec05df0e-80d9-4c50-934f-51a6c70162e5) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-ns4h4 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-ns4h4 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-459jr to bootstrap-e2e-minion-group-wdrf Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.32336706s (1.323386564s including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 947.438414ms (947.452201ms including waiting) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Killing: Stopping container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet 
bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-459jr_kube-system(0b875c70-ea77-4ce6-89d4-b06d714cae18) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-459jr Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
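[Editor's note] The Unhealthy events above are the kubelet reporting HTTPS liveness/readiness probe failures against the metrics-server pod while its node is unreachable. Purely to illustrate the mechanism, here is a sketch of a probe of that shape; only the path, port, and scheme mirror the URLs in the events, and the timing values are assumptions, not settings read from this cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Assumed example values; a probe like this yields "connection refused"
	// when the target is down and "Client.Timeout exceeded while awaiting
	// headers" when the request outlives TimeoutSeconds.
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(10250),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   1,
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}
	fmt.Printf("probe: %s GET :%s%s every %ds\n",
		readiness.HTTPGet.Scheme, readiness.HTTPGet.Port.String(),
		readiness.HTTPGet.Path, readiness.PeriodSeconds)
}
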
Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Created: Created container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Started: Started container metrics-server-nanny Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: Get "https://10.64.1.15:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Liveness probe failed: Get "https://10.64.1.15:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9-459jr: {kubelet bootstrap-e2e-minion-group-wdrf} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-459jr Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 28 18:09:59.976: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
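[Editor's note] The per-pod event listing above (and continuing below) is the framework dumping kube-system events after the failure. A minimal client-go sketch of the same idea follows; the kubeconfig path is an assumption for illustration, not taken from the framework's own code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the "event for <pod>: {<component> <host>} <Reason>: <message>" lines above.
	for _, e := range evs.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}
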
Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hh49 Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.452024678s (2.452032898s including waiting) Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Created: Created container volume-snapshot-controller Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Started: Started container volume-snapshot-controller Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Killing: Stopping container volume-snapshot-controller Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hh49} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(89171dd3-bbcb-4863-8db1-bf282b44eb66) Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 28 18:09:59.976: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:09:59.976 (61ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:09:59.976 Jan 28 18:09:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 28 18:10:00.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:02.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:10:04.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:06.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:08.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:10.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:12.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:14.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:16.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:18.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:20.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:22.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:24.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:26.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:10:28.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:30.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:32.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:34.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:36.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:38.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:40.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:42.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:44.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:46.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:48.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:50.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:10:52.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:54.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:56.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:10:58.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:00.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:02.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:04.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:08.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:10.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:12.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:14.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:11:16.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:18.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:20.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:22.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:24.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:26.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:28.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:30.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:32.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:34.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:36.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:38.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. 
Failure Jan 28 18:11:40.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:42.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:44.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:46.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:48.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:50.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:52.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:54.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:56.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:11:58.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:12:00.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:06:08 +0000 UTC}]. Failure Jan 28 18:12:02.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 18:04:58 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. 
Failure Jan 28 18:12:04.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. Failure Jan 28 18:12:06.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-28 18:12:01 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:12:08.076 (2m8.1s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:12:08.076 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:12:08.076 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:12:08.076 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:12:08.076 STEP: Collecting events from namespace "reboot-4450". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 18:12:08.076 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/28/23 18:12:08.127 Jan 28 18:12:08.189: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 18:12:08.189: INFO: Jan 28 18:12:08.239: INFO: Logging node info for node bootstrap-e2e-master Jan 28 18:12:08.290: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09bb0353-b867-43a6-9f64-6e45f9c4aeb9 2751 0 2023-01-28 17:48:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 17:48:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-28 17:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-28 17:49:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-28 18:10:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858378752 0} {<nil>} 3767948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596234752 0} {<nil>} 3511948Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 17:49:15 +0000 UTC,LastTransitionTime:2023-01-28 17:49:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:48:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 18:10:25 +0000 UTC,LastTransitionTime:2023-01-28 17:49:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.33.232,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ceaf667f6b5e1324cd116eb2db802512,SystemUUID:ceaf667f-6b5e-1324-cd11-6eb2db802512,BootID:79f7efc7-8b19-44a9-8ebd-59b6af441d89,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3,KubeletVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,KubeProxyVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:125274937,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:57551160,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:08.291: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 28 18:12:08.375: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 28 18:12:08.530: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-apiserver ready: true, restart count 4 Jan 28 18:12:08.530: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-scheduler ready: true, restart count 5 Jan 28 18:12:08.530: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container konnectivity-server-container ready: true, restart count 4 Jan 28 18:12:08.530: INFO: 
kube-controller-manager-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-controller-manager ready: true, restart count 7 Jan 28 18:12:08.530: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container etcd-container ready: true, restart count 3 Jan 28 18:12:08.530: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-28 17:48:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container etcd-container ready: true, restart count 2 Jan 28 18:12:08.530: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-28 17:48:31 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 28 18:12:08.530: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-28 17:48:31 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:08.530: INFO: Container l7-lb-controller ready: true, restart count 6 Jan 28 18:12:08.530: INFO: metadata-proxy-v0.1-77zds started at 2023-01-28 17:49:13 +0000 UTC (0+2 container statuses recorded) Jan 28 18:12:08.530: INFO: Container metadata-proxy ready: true, restart count 0 Jan 28 18:12:08.530: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 28 18:12:08.778: INFO: Latency metrics for node bootstrap-e2e-master Jan 28 18:12:08.778: INFO: Logging node info for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:08.822: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hh49 b58c3df1-874b-46d3-a80a-d5aa409735f5 2935 0 2023-01-28 17:48:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hh49 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-28 17:48:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-28 18:04:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-28 18:12:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-28 18:12:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-28 18:12:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-minion-group-hh49,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 
18:07:06 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:05 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-28 18:12:07 +0000 UTC,LastTransitionTime:2023-01-28 18:07:06 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-28 17:49:04 +0000 UTC,LastTransitionTime:2023-01-28 17:49:04 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-28 18:12:02 +0000 UTC,LastTransitionTime:2023-01-28 18:12:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.65.26,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hh49.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hh49.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4f73fe47b8cfe6109fc20d7e94c98130,SystemUUID:4f73fe47-b8cf-e610-9fc2-0d7e94c98130,BootID:ff674672-75de-440b-9579-1045113effda,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3,KubeletVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,KubeProxyVersion:v1.27.0-alpha.1.69+d7cb1c54a540c9,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.69_d7cb1c54a540c9],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:08.823: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:08.886: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:09.388: INFO: l7-default-backend-8549d69d99-655gf started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container default-http-backend ready: false, restart count 1 Jan 
28 18:12:09.388: INFO: metadata-proxy-v0.1-m8bfq started at 2023-01-28 17:48:56 +0000 UTC (0+2 container statuses recorded) Jan 28 18:12:09.388: INFO: Container metadata-proxy ready: true, restart count 1 Jan 28 18:12:09.388: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 28 18:12:09.388: INFO: volume-snapshot-controller-0 started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container volume-snapshot-controller ready: true, restart count 8 Jan 28 18:12:09.388: INFO: coredns-6846b5b5f-khvz4 started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container coredns ready: false, restart count 4 Jan 28 18:12:09.388: INFO: kube-dns-autoscaler-5f6455f985-bk5tm started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container autoscaler ready: false, restart count 4 Jan 28 18:12:09.388: INFO: konnectivity-agent-d8pzk started at 2023-01-28 17:49:04 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 28 18:12:09.388: INFO: coredns-6846b5b5f-57g9r started at 2023-01-28 17:49:12 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container coredns ready: false, restart count 6 Jan 28 18:12:09.388: INFO: kube-proxy-bootstrap-e2e-minion-group-hh49 started at 2023-01-28 17:48:55 +0000 UTC (0+1 container statuses recorded) Jan 28 18:12:09.388: INFO: Container kube-proxy ready: true, restart count 6 Jan 28 18:12:45.801: INFO: Latency metrics for node bootstrap-e2e-minion-group-hh49 Jan 28 18:12:45.801: INFO: Logging node info for node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.840: INFO: Error getting node info Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-sxb0": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.840: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:45.840: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.880: INFO: Unexpected error retrieving node events Get "https://35.247.33.232/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-sxb0": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.880: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sxb0 Jan 28 18:12:45.919: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sxb0: Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-sxb0:10250/proxy/pods": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.919: INFO: Logging node info for node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:45.959: INFO: Error getting node 
info Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-wdrf": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.959: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 28 18:12:45.959: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:45.998: INFO: Unexpected error retrieving node events Get "https://35.247.33.232/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-wdrf": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:12:45.998: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-wdrf Jan 28 18:12:46.038: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-wdrf: Get "https://35.247.33.232/api/v1/nodes/bootstrap-e2e-minion-group-wdrf:10250/proxy/pods": dial tcp 35.247.33.232:443: connect: connection refused END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:12:46.038 (37.962s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:12:46.038 (37.962s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:12:46.038 STEP: Destroying namespace "reboot-4450" for this suite. - test/e2e/framework/framework.go:347 @ 01/28/23 18:12:46.038 [FAILED] Couldn't delete ns: "reboot-4450": Delete "https://35.247.33.232/api/v1/namespaces/reboot-4450": dial tcp 35.247.33.232:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.247.33.232/api/v1/namespaces/reboot-4450", Err:(*net.OpError)(0xc00342c370)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 18:12:46.078 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:12:46.078 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:12:46.078 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:12:46.078 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:01:51.15 There were additional failures detected after the initial failure. These are visible in the timeline from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:21.029 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:01:21.029 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:01:21.029 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 18:01:21.029 Jan 28 18:01:21.029: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 18:01:21.03 Jan 28 18:01:21.070: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:23.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:25.111: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:27.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:29.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:31.111: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:33.109: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:35.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:37.111: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:39.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:41.111: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:43.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:45.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:47.109: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:49.109: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:51.110: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:51.150: INFO: Unexpected error while creating 
namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:01:51.150: INFO: Unexpected error: <*errors.errorString | 0xc000205c80>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:01:51.15 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:01:51.15 (30.121s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:01:51.15 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:01:51.15 Jan 28 18:01:51.190: INFO: Unexpected error: <*url.Error | 0xc003bfc570>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0016e6230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ed46c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0001c8c40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.247.33.232/api/v1/namespaces/kube-system/events": dial tcp 35.247.33.232:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 18:01:51.19 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:01:51.19 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:01:51.19 Jan 28 18:01:51.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:01:51.229 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:01:51.229 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:01:51.229 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:01:51.229 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:01:51.23 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:01:51.23 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:01:51.23 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:01:51.23 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:01:51.23 (0s)
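The block above is the namespace-creation retry loop giving up: the apiserver at 35.247.33.232 refused connections for the whole roughly 30-second budget, so the framework surfaces the apimachinery wait package's generic timeout error, which reads exactly "timed out waiting for the condition". A minimal sketch of that poll-until-deadline pattern follows; createNamespace is a hypothetical stand-in for the framework's call, not its real code:

package main

import (
    "errors"
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

// createNamespace is a hypothetical stand-in for the framework's namespace creation;
// here it always fails the way the unreachable apiserver did in the log above.
func createNamespace() error {
    return errors.New(`Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused`)
}

func main() {
    // Retry every 2s for 30s; on deadline the wait package returns its generic error,
    // which is the "timed out waiting for the condition" line reported by the test.
    err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
        if cerr := createNamespace(); cerr != nil {
            fmt.Printf("INFO: Unexpected error while creating namespace: %v\n", cerr)
            return false, nil // keep polling
        }
        return true, nil
    })
    fmt.Println(err) // timed out waiting for the condition
}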
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:01:51.15 There were additional failures detected after the initial failure. These are visible in the timeline from junit_01.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:00:20.283 There were additional failures detected after the initial failure. These are visible in the timeline from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 17:54:34.003 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 17:54:34.003 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 17:54:34.003 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 17:54:34.003 Jan 28 17:54:34.003: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 17:54:34.004 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 17:54:34.134 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 17:54:34.217 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 17:54:34.3 (297ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 17:54:34.3 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 17:54:34.3 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 17:54:34.3 Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-sxb0 Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-wdrf Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-hh49 Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 17:54:34.514: INFO: Node bootstrap-e2e-minion-group-hh49 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.514: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.514: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Node bootstrap-e2e-minion-group-wdrf has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Node bootstrap-e2e-minion-group-sxb0 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-bk5tm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hh49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-m8bfq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.565: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm": Phase="Running", Reason="", readiness=true. Elapsed: 50.252252ms Jan 28 17:54:34.565: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.566: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 51.712592ms Jan 28 17:54:34.566: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 52.129468ms Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "metadata-proxy-v0.1-m8bfq": Phase="Running", Reason="", readiness=true. Elapsed: 52.312121ms Jan 28 17:54:34.567: INFO: Pod "metadata-proxy-v0.1-m8bfq" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49": Phase="Running", Reason="", readiness=true. Elapsed: 52.423325ms Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.567: INFO: Getting external IP address for bootstrap-e2e-minion-group-hh49 Jan 28 17:54:34.567: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hh49(34.168.65.26:22) Jan 28 17:54:34.568: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.60143ms Jan 28 17:54:34.568: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 53.123157ms Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.568: INFO: Getting external IP address for bootstrap-e2e-minion-group-wdrf Jan 28 17:54:34.568: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wdrf(34.168.17.115:22) Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 53.059159ms Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.568: INFO: Getting external IP address for bootstrap-e2e-minion-group-sxb0 Jan 28 17:54:34.568: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-sxb0(35.197.97.48:22) Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: stdout: "" Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: stderr: "" Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: exit code: 0 Jan 28 17:54:35.078: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be false Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee 
/dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: stdout: "" Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: stderr: "" Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: stdout: "" Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: stderr: "" Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: exit code: 0 Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: exit code: 0 Jan 28 17:54:35.086: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be false Jan 28 17:54:35.086: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be false Jan 28 17:54:35.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:35.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:35.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:41.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 17:54:41.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:41.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.079: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 17:55:20.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:22.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:22.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:22.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:24.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:24.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:24.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.334: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 17:55:26.334: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 17:55:26.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:28.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:28.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:28.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:30.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:30.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:30.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:32.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:32.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:32.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:40.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:55:40.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:40.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:42.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:42.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:42.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:46.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:46.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:46.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:48.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:48.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:48.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:50.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:55:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:52.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:52.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:52.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:55.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:55.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:55.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:57.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:57.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:57.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:59.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:01.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:01.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:56:01.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:03.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:03.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:03.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:05.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:05.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:05.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:07.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:07.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:07.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:09.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:09.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:09.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:11.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:11.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:11.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:13.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:13.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:13.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:17.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:17.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:17.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:19.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:19.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:19.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:21.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:21.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:21.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
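The block of entries above is the framework repeatedly checking each node's Ready condition (via the WaitConditionToBe helper that appears later in the goroutine dumps) while the kubelets are rebooting: hh49 is being waited on to become Ready again, while sxb0 and wdrf are first expected to go NotReady. As a rough illustration of what such a poll does, here is a hypothetical client-go sketch (not the actual test/e2e/framework code); the helper names and the 2-second interval are assumptions based only on the log cadence.

```go
// Sketch only: a minimal re-implementation of the kind of Ready-condition poll
// that produces the log lines above. It is NOT the framework's actual
// WaitConditionToBe code; names, intervals and logging are illustrative.
package rebootsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReadyStatusIs checks the node's Ready condition once and logs in the
// same style as the entries above when the status does not match.
func nodeReadyStatusIs(ctx context.Context, c kubernetes.Interface, name string, want corev1.ConditionStatus) bool {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		fmt.Printf("Couldn't get node %s\n", name)
		return false
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type != corev1.NodeReady {
			continue
		}
		if cond.Status == want {
			return true
		}
		fmt.Printf("Condition Ready of node %s is %v instead of %v. Reason: %s, message: %s\n",
			name, cond.Status, want, cond.Reason, cond.Message)
		return false
	}
	return false
}

// waitForNodeReadyStatus polls roughly every two seconds (matching the cadence
// visible in the log) until the condition matches or the timeout expires.
func waitForNodeReadyStatus(ctx context.Context, c kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) bool {
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		if nodeReadyStatusIs(ctx, c, name, want) {
			return true
		}
	}
	return false
}
```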
Jan 28 17:56:23.615: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:23.667: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:23.667: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:25.655: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:25.707: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:25.707: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:27.696: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:27.748: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:27.748: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:29.737: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:29.788: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:29.788: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:31.777: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:31.829: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:31.829: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:33.817: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:33.868: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:33.868: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:35.857: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:35.909: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:35.909: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:37.897: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:37.949: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:37.949: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:39.938: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:39.990: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:39.990: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:41.978: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:42.030: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:42.030: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:44.018: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:44.070: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:44.070: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:46.058: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:46.110: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:46.110: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:48.098: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:48.149: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:48.149: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:50.139: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:50.189: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:50.189: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:52.180: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:52.230: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:52.230: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is 
false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:57:05.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 49.290712ms Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 17:57:05.688: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 49.531083ms Jan 28 17:57:05.688: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 49.480266ms Jan 28 17:57:05.688: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:05.689: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. 
Elapsed: 49.989942ms Jan 28 17:57:05.689: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:07.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:07.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 2.095586589s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:07.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 2.096158391s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:07.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 2.095959118s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:09.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:09.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.097790515s Jan 28 17:57:09.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:09.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 4.097650195s Jan 28 17:57:09.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:09.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 4.097621669s Jan 28 17:57:09.737: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:11.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:11.803: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 6.164088983s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:11.803: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.163980836s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:11.803: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 6.163910639s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 8.096109483s Jan 28 17:57:13.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 8.095961492s Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 8.096568931s Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:13.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:57:15.736: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 10.096562986s Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:15.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 10.097063438s Jan 28 17:57:15.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 10.096795836s Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:16.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:17.736: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 12.09684301s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:17.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.097087143s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:17.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 12.097590365s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:18.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 14.095738058s Jan 28 17:57:19.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:19.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 14.095753547s Jan 28 17:57:19.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 14.096287184s Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 17:57:19.735: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:57:19.735: INFO: Reboot successful on node bootstrap-e2e-minion-group-sxb0 Jan 28 17:57:20.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:21.733: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 16.093668676s Jan 28 17:57:21.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:21.733: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 16.094058255s Jan 28 17:57:21.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:22.133: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:23.750: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 18.111033507s Jan 28 17:57:23.750: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:23.751: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.111946844s Jan 28 17:57:23.751: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:24.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:25.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 20.093484376s Jan 28 17:57:25.733: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 20.093672206s Jan 28 17:57:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:26.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:27.732: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 22.09343497s Jan 28 17:57:27.732: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 17:57:27.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 22.093440188s Jan 28 17:57:27.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 17:57:27.732: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:57:27.732: INFO: Reboot successful on node bootstrap-e2e-minion-group-wdrf Jan 28 17:57:28.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:30.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:32.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:34.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:36.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:38.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:40.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:42.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:44.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:46.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:48.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:57:50.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:52.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:54.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:56.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:58.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:00.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:03.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:05.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:07.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:09.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:11.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:13.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:58:15.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:17.333: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:19.374: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:21.415: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:23.455: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:25.494: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:27.535: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:29.575: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:31.615: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:33.656: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:35.697: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:37.736: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:39.776: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:41.816: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:43.856: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:45.896: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:47.936: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:49.977: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:52.017: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:54.057: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:56.097: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:58.137: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:00.178: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:02.218: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:04.257: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:06.297: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:08.337: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:10.378: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:12.418: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:19.758: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:21.798: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:23.838: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:25.878: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:27.919: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:29.960: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:32.001: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:34.041: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m0.298s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 17:59:36.081: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:38.122: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:40.163: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:42.202: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:44.242: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:46.282: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:48.321: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:50.361: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:52.402: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m20.3s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m20.003s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 17:59:54.442: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:56.481: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:58.521: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:00.562: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:02.602: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:04.643: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:12.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 18:00:14.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m40.301s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m40.004s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:00:16.243: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:18.283: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:20.283: INFO: Node bootstrap-e2e-minion-group-hh49 didn't reach desired Ready condition status (true) within 5m0s Jan 28 18:00:20.283: INFO: Node bootstrap-e2e-minion-group-hh49 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:00:20.283 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 18:00:20.283 (5m45.983s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:20.283 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:00:20.284 Jan 28 18:00:20.323: INFO: Unexpected error: <*url.Error | 0xc0041ee030>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc001cda3c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003b291d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003790000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.247.33.232/api/v1/namespaces/kube-system/events": dial tcp 35.247.33.232:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 18:00:20.323 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:20.323 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:20.323 Jan 28 18:00:20.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:20.362 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:00:20.363 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:00:20.363 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:20.363 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:20.363 STEP: Collecting events from namespace "reboot-9141". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 18:00:20.363 Jan 28 18:00:20.402: INFO: Unexpected error: failed to list events in namespace "reboot-9141": <*url.Error | 0xc003b29200>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/reboot-9141/events", Err: <*net.OpError | 0xc001452e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0014ab470>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002f62420>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:20.403 (40ms) [FAILED] failed to list events in namespace "reboot-9141": Get "https://35.247.33.232/api/v1/namespaces/reboot-9141/events": dial tcp 35.247.33.232:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 18:00:20.403 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:20.403 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:20.403 STEP: Destroying namespace "reboot-9141" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/28/23 18:00:20.403 [FAILED] Couldn't delete ns: "reboot-9141": Delete "https://35.247.33.232/api/v1/namespaces/reboot-9141": dial tcp 35.247.33.232:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.247.33.232/api/v1/namespaces/reboot-9141", Err:(*net.OpError)(0xc001cdaa50)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 18:00:20.442 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:20.442 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:20.443 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:20.443 (0s)
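The goroutine dumps above show the spec blocked in test/e2e/framework/node.WaitConditionToBe, which just polls the node's Ready condition until a timeout. For reference, a minimal client-go sketch of that polling pattern is below, assuming a reachable kubeconfig at /workspace/.kube/config; the helper name pollNodeReady and the 2-second interval are illustrative only, not the framework's actual code.

```go
// Minimal sketch (not the framework's helper): poll a node's Ready condition,
// roughly what the WaitConditionToBe frames in the goroutine dumps are doing.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady returns nil once the node reports Ready == wantReady, or an
// error when the timeout elapses (compare "didn't reach desired Ready
// condition status (true) within 5m0s" in the log above).
func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, wantReady bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// The framework logs a similar "Couldn't get node ..." message on this path.
			fmt.Printf("Couldn't get node %s: %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady && (c.Status == v1.ConditionTrue) == wantReady {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("node %s didn't reach Ready=%v within %v", name, wantReady, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := pollNodeReady(context.Background(), cs, "bootstrap-e2e-minion-group-hh49", true, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

In this run the repeated "Couldn't get node ..." lines correspond to the Get call returning an error, so the loop keeps retrying until the 5m0s budget is exhausted.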
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:00:20.283 There were additional failures detected after the initial failure. These are visible in the timeline from junit_01.xml
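This spec disrupts each node by taking its primary network interface down over SSH and bringing it back up about two minutes later; the exact nohup command string appears verbatim in the SSH lines of the log below. For illustration only, issuing a command of that shape with golang.org/x/crypto/ssh might look like the sketch below; the user, host, and key path are placeholders, and the real test uses the e2e framework's own SSH helpers rather than this code.

```go
// Hypothetical sketch: run an eth0 down/up command on a test VM over SSH.
// Host, user, and key path are placeholders for this run's values.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: key-based auth with a local private key; adjust to your setup.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/google_compute_engine")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "34.168.65.26:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same shape as the test's command: take eth0 down, wait, bring it back up.
	cmd := "nohup sh -c 'sleep 10; sudo ip link set eth0 down; sleep 120; " +
		"sudo ip link set eth0 up; sudo dhclient' >/dev/null 2>&1 &"
	if err := session.Run(cmd); err != nil {
		panic(err)
	}
	fmt.Println("disruption command issued")
}
```

The nohup ... & wrapping matters: the command has to keep running after the SSH session is severed when eth0 goes down, otherwise the interface would never be brought back up.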
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 17:54:34.003 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 17:54:34.003 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 17:54:34.003 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 17:54:34.003 Jan 28 17:54:34.003: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 17:54:34.004 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/28/23 17:54:34.134 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/28/23 17:54:34.217 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 17:54:34.3 (297ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 17:54:34.3 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/28/23 17:54:34.3 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 17:54:34.3 Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-sxb0 Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-wdrf Jan 28 17:54:34.394: INFO: Getting bootstrap-e2e-minion-group-hh49 Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 17:54:34.470: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 17:54:34.514: INFO: Node bootstrap-e2e-minion-group-hh49 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.514: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.514: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Node bootstrap-e2e-minion-group-wdrf has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Node bootstrap-e2e-minion-group-sxb0 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-bk5tm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hh49" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-m8bfq" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.515: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:54:34.565: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm": Phase="Running", Reason="", readiness=true. Elapsed: 50.252252ms Jan 28 17:54:34.565: INFO: Pod "kube-dns-autoscaler-5f6455f985-bk5tm" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.566: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 51.712592ms Jan 28 17:54:34.566: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 52.129468ms Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "metadata-proxy-v0.1-m8bfq": Phase="Running", Reason="", readiness=true. Elapsed: 52.312121ms Jan 28 17:54:34.567: INFO: Pod "metadata-proxy-v0.1-m8bfq" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49": Phase="Running", Reason="", readiness=true. Elapsed: 52.423325ms Jan 28 17:54:34.567: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hh49" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.567: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-bk5tm kube-proxy-bootstrap-e2e-minion-group-hh49 metadata-proxy-v0.1-m8bfq volume-snapshot-controller-0] Jan 28 17:54:34.567: INFO: Getting external IP address for bootstrap-e2e-minion-group-hh49 Jan 28 17:54:34.567: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hh49(34.168.65.26:22) Jan 28 17:54:34.568: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.60143ms Jan 28 17:54:34.568: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 53.123157ms Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:54:34.568: INFO: Getting external IP address for bootstrap-e2e-minion-group-wdrf Jan 28 17:54:34.568: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-wdrf(34.168.17.115:22) Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 53.059159ms Jan 28 17:54:34.568: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 17:54:34.568: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:54:34.568: INFO: Getting external IP address for bootstrap-e2e-minion-group-sxb0 Jan 28 17:54:34.568: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-sxb0(35.197.97.48:22) Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: stdout: "" Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: stderr: "" Jan 28 17:54:35.078: INFO: ssh prow@34.168.65.26:22: exit code: 0 Jan 28 17:54:35.078: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be false Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee 
/dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: stdout: "" Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: stderr: "" Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: stdout: "" Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: stderr: "" Jan 28 17:54:35.086: INFO: ssh prow@34.168.17.115:22: exit code: 0 Jan 28 17:54:35.086: INFO: ssh prow@35.197.97.48:22: exit code: 0 Jan 28 17:54:35.086: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be false Jan 28 17:54:35.086: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be false Jan 28 17:54:35.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:35.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:35.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:37.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:39.219: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:41.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 17:54:41.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:41.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:43.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:45.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:47.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:49.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:51.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:53.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:55.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:57.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:54:59.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:01.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:03.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 28 17:55:05.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:07.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:09.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:11.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:14.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.077: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:16.079: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:18.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:20.200: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hh49 condition Ready to be true Jan 28 17:55:20.254: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:22.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:22.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:22.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:24.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:24.290: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 28 17:55:24.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.334: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-sxb0 condition Ready to be true Jan 28 17:55:26.334: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-wdrf condition Ready to be true Jan 28 17:55:26.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:26.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:28.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:28.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:28.427: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:30.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:30.470: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:30.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:32.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:32.516: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:32.517: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:34.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:36.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:38.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:40.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:55:40.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:40.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:42.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:42.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:42.753: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:44.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:46.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:46.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:46.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:48.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:48.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:48.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:50.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:55:50.938: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:52.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:52.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:52.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:55.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:55.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:55.030: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:57.051: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:57.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:57.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:59.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:55:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:55:59.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:01.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:01.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:56:01.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:03.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:03.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:03.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:05.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:05.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:05.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:07.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:07.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:07.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:09.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:09.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:09.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:11.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:11.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:11.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:13.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:13.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:13.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.446: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:15.490: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:17.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:17.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:17.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:19.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:19.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:19.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:21.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:56:21.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:56:21.627: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 28 17:56:23.615: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:23.667: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:23.667: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:25.655: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:25.707: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:25.707: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:27.696: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:27.748: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:27.748: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:29.737: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:29.788: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:29.788: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:31.777: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:31.829: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:31.829: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:33.817: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:33.868: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:33.868: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:35.857: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:35.909: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:35.909: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:37.897: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:37.949: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:37.949: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:39.938: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:39.990: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:39.990: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:41.978: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:42.030: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:42.030: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:44.018: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:44.070: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:44.070: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:46.058: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:46.110: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:46.110: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:48.098: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:48.149: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:48.149: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:50.139: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:50.189: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:56:50.189: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:52.180: INFO: Couldn't get node bootstrap-e2e-minion-group-sxb0 Jan 28 17:56:52.230: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:56:52.230: INFO: Couldn't get node bootstrap-e2e-minion-group-wdrf Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is 
false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-sxb0 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:57:03.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-wdrf is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 28 17:57:05.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-br5vs" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5zpds" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.639: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" in namespace "kube-system" to be "running and ready, or succeeded" Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0": Phase="Running", Reason="", readiness=true. Elapsed: 49.290712ms Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-sxb0" satisfied condition "running and ready, or succeeded" Jan 28 17:57:05.688: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 49.531083ms Jan 28 17:57:05.688: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:05.688: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 49.480266ms Jan 28 17:57:05.688: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:05.689: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. 
Elapsed: 49.989942ms Jan 28 17:57:05.689: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:07.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:07.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 2.095586589s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:07.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 2.096158391s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:07.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 2.095959118s Jan 28 17:57:07.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:09.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:09.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.097790515s Jan 28 17:57:09.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:09.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 4.097650195s Jan 28 17:57:09.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:09.737: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 4.097621669s Jan 28 17:57:09.737: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:11.803: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:11.803: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 6.164088983s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:11.803: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.163980836s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:11.803: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 6.163910639s Jan 28 17:57:11.803: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 8.096109483s Jan 28 17:57:13.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 8.095961492s Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:13.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 8.096568931s Jan 28 17:57:13.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:13.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:57:15.736: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 10.096562986s Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:15.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 10.097063438s Jan 28 17:57:15.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 10.096795836s Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:15.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:16.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:17.736: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 12.09684301s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:17.736: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.097087143s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:17.736: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=false. Elapsed: 12.097590365s Jan 28 17:57:17.736: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-br5vs' on 'bootstrap-e2e-minion-group-sxb0' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:58 +0000 UTC }] Jan 28 17:57:18.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 14.095738058s Jan 28 17:57:19.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:19.735: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 14.095753547s Jan 28 17:57:19.735: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-br5vs": Phase="Running", Reason="", readiness=true. Elapsed: 14.096287184s Jan 28 17:57:19.735: INFO: Pod "metadata-proxy-v0.1-br5vs" satisfied condition "running and ready, or succeeded" Jan 28 17:57:19.735: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-sxb0 metadata-proxy-v0.1-br5vs] Jan 28 17:57:19.735: INFO: Reboot successful on node bootstrap-e2e-minion-group-sxb0 Jan 28 17:57:20.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:21.733: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 16.093668676s Jan 28 17:57:21.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:21.733: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 16.094058255s Jan 28 17:57:21.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:22.133: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:23.750: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 18.111033507s Jan 28 17:57:23.750: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:23.751: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.111946844s Jan 28 17:57:23.751: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:24.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:25.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=false. Elapsed: 20.093484376s Jan 28 17:57:25.733: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=false. Elapsed: 20.093672206s Jan 28 17:57:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-wdrf' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:25.733: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5zpds' on 'bootstrap-e2e-minion-group-wdrf' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-28 17:48:56 +0000 UTC }] Jan 28 17:57:26.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:27.732: INFO: Pod "metadata-proxy-v0.1-5zpds": Phase="Running", Reason="", readiness=true. Elapsed: 22.09343497s Jan 28 17:57:27.732: INFO: Pod "metadata-proxy-v0.1-5zpds" satisfied condition "running and ready, or succeeded" Jan 28 17:57:27.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf": Phase="Running", Reason="", readiness=true. Elapsed: 22.093440188s Jan 28 17:57:27.732: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-wdrf" satisfied condition "running and ready, or succeeded" Jan 28 17:57:27.732: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-wdrf metadata-proxy-v0.1-5zpds] Jan 28 17:57:27.732: INFO: Reboot successful on node bootstrap-e2e-minion-group-wdrf Jan 28 17:57:28.264: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:30.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:32.355: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:34.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:36.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:38.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:40.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:42.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:44.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:46.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:48.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:57:50.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:52.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:54.843: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:56.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:57:58.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:00.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:03.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:05.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:07.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:09.162: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:11.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:13.249: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. 
Failure Jan 28 17:58:15.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 17:58:17.333: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:19.374: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:21.415: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:23.455: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:25.494: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:27.535: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:29.575: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:31.615: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:33.656: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:35.697: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:37.736: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:39.776: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:41.816: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:43.856: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:45.896: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:47.936: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:49.977: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:52.017: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:54.057: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:56.097: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:58:58.137: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:00.178: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:02.218: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:04.257: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:06.297: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:08.337: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:10.378: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:12.418: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:19.758: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:21.798: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:23.838: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:25.878: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:27.919: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:29.960: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:32.001: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:34.041: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m0.298s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 17:59:36.081: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:38.122: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:40.163: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:42.202: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:44.242: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:46.282: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:48.321: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:50.361: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:52.402: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m20.3s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m20.003s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 17:59:54.442: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:56.481: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 17:59:58.521: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:00.562: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:02.602: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:04.643: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:12.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Jan 28 18:00:14.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-hh49 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-28 17:55:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-28 17:55:25 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on (Spec Runtime: 5m40.301s) test/e2e/cloud/gcp/reboot.go:115 In [It] (Node Runtime: 5m40.004s) test/e2e/cloud/gcp/reboot.go:115 Spec Goroutine goroutine 1701 [semacquire, 5 minutes] sync.runtime_Semacquire(0xc0013196b0?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7f6080dbf098?) 
/usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7f6080dbf098?, 0xc002302800}, {0x8146f48?, 0xc001cf4340}, {0x7903d4e, 0x21e}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.6({0x7f6080dbf098?, 0xc002302800?}) test/e2e/cloud/gcp/reboot.go:133 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111d28?, 0xc002302800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 1703 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0xc000eaa680, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7f6080dbf098, 0xc002302800}, {0x8146f48, 0xc001cf4340}, {0x7ffeb48ed5ea, 0x3}, {0xc000eaa680, 0x1f}, {0x7903d4e, 0x21e}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x0) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 28 18:00:16.243: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:18.283: INFO: Couldn't get node bootstrap-e2e-minion-group-hh49 Jan 28 18:00:20.283: INFO: Node bootstrap-e2e-minion-group-hh49 didn't reach desired Ready condition status (true) within 5m0s Jan 28 18:00:20.283: INFO: Node bootstrap-e2e-minion-group-hh49 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/28/23 18:00:20.283 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/28/23 18:00:20.283 (5m45.983s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:20.283 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:00:20.284 Jan 28 18:00:20.323: INFO: Unexpected error: <*url.Error | 0xc0041ee030>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc001cda3c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003b291d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003790000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.247.33.232/api/v1/namespaces/kube-system/events": dial tcp 35.247.33.232:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 18:00:20.323 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:20.323 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:20.323 Jan 28 18:00:20.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:20.362 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:00:20.363 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/28/23 18:00:20.363 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:20.363 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:20.363 STEP: Collecting events from namespace "reboot-9141". - test/e2e/framework/debug/dump.go:42 @ 01/28/23 18:00:20.363 Jan 28 18:00:20.402: INFO: Unexpected error: failed to list events in namespace "reboot-9141": <*url.Error | 0xc003b29200>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/reboot-9141/events", Err: <*net.OpError | 0xc001452e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0014ab470>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002f62420>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:20.403 (40ms) [FAILED] failed to list events in namespace "reboot-9141": Get "https://35.247.33.232/api/v1/namespaces/reboot-9141/events": dial tcp 35.247.33.232:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/28/23 18:00:20.403 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:20.403 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:20.403 STEP: Destroying namespace "reboot-9141" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/28/23 18:00:20.403 [FAILED] Couldn't delete ns: "reboot-9141": Delete "https://35.247.33.232/api/v1/namespaces/reboot-9141": dial tcp 35.247.33.232:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.247.33.232/api/v1/namespaces/reboot-9141", Err:(*net.OpError)(0xc001cdaa50)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/28/23 18:00:20.442 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:20.442 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:20.443 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:20.443 (0s)
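Note on the failure above: the goroutine dump shows the spec blocked in k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady, which polls the node's Ready condition until a 5m0s deadline; the repeated "Couldn't get node ..." lines are those polls failing while the API server is unreachable. The following is only a minimal illustrative sketch of that pattern (client-go based, with an assumed helper name waitForNodeReady and an assumed 2s poll interval), not the framework's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node every two seconds until it reports
// Ready=True or the timeout expires, mirroring the retry/timeout behaviour
// visible in the log above ("Couldn't get node ..." followed by
// "didn't reach desired Ready condition status (true) within 5m0s").
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// API server unreachable (e.g. connection refused): keep retrying.
			fmt.Printf("Couldn't get node %s: %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s didn't reach desired Ready condition status (true) within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "bootstrap-e2e-minion-group-hh49", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

With the API server refusing connections for the whole window, every Get fails and a loop of this shape ends with the same "didn't reach desired Ready condition status (true) within 5m0s" outcome logged above.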
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:00:50.722 There were additional failures detected after the initial failure. These are visible in the timeline from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:00:20.6 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/28/23 18:00:20.6 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:00:20.6 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/28/23 18:00:20.601 Jan 28 18:00:20.601: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/28/23 18:00:20.602 Jan 28 18:00:20.641: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:22.681: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:24.681: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:26.684: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:28.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:30.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:32.681: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:34.681: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:36.683: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:38.685: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:40.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:42.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:44.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:46.682: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:48.683: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:50.683: INFO: Unexpected error while creating namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:50.722: INFO: Unexpected error while creating 
namespace: Post "https://35.247.33.232/api/v1/namespaces": dial tcp 35.247.33.232:443: connect: connection refused Jan 28 18:00:50.722: INFO: Unexpected error: <*errors.errorString | 0xc000205c80>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:00:50.722 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/28/23 18:00:50.722 (30.122s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:50.722 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/28/23 18:00:50.722 Jan 28 18:00:50.762: INFO: Unexpected error: <*url.Error | 0xc003bfc000>: { Op: "Get", URL: "https://35.247.33.232/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc0016e6000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003bb5260>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 247, 33, 232], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0001c8060>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.247.33.232/api/v1/namespaces/kube-system/events": dial tcp 35.247.33.232:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/28/23 18:00:50.762 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/28/23 18:00:50.762 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:50.762 Jan 28 18:00:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/28/23 18:00:50.802 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:50.802 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:50.802 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/28/23 18:00:50.802 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/28/23 18:00:50.802 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:50.802 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/28/23 18:00:50.802 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:50.802 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/28/23 18:00:50.802 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/28/23 18:00:50.722 There were additional failures detected after the initial failure. These are visible in the timeline from junit_01.xml
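For context, the BeforeEach failure reported above (for both ginkgo_report.xml and junit_01.xml, which carry the same timeline) is the framework's namespace-creation retry giving up: the POST to /api/v1/namespaces is retried roughly every 2s while the connection is refused, and once the wait deadline passes the generic apimachinery wait error "timed out waiting for the condition" is what surfaces. A rough sketch of that shape (assumed package and function names, with the interval and 30s deadline inferred from the ~30s elapsed in the log; not the framework source):

package nsretry

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries Namespaces().Create until it succeeds or the
// poll times out; on exhaustion wait.PollImmediate returns the generic
// "timed out waiting for the condition" error seen in the failure above.
func createTestNamespace(ctx context.Context, cs kubernetes.Interface, basename string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// e.g. "dial tcp ...:443: connect: connection refused" -> keep retrying.
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil
		}
		created = ns
		return true, nil
	})
	return created, err
}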
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:Reboot\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e JUnit report
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [ReportBeforeSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest kubectl version
kubetest list nodes
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply an invalid CR with extra properties for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown and duplicate fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] SubjectReview should support SubjectReview API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cli] Kubectl logs default container logs the second container is the default-container by annotation should log default container if not specified
Kubernetes e2e suite [It] [sig-cli] Kubectl logs logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics slis from API server.
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoint with multiple subsets and same IP address
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Cluster [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Local [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Serial]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should allow creating a Pod with an SCTP HostPort [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow creating a basic SCTP service with pod and endpoints [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum