go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001240870)
	test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-api-machinery] Servers with support for API chunking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:49:19.822 Nov 26 14:49:19.822: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename chunking 11/26/22 14:49:19.823 Nov 26 14:49:19.863: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:21.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:23.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:25.902: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:27.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:29.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:31.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:33.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:35.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:37.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:39.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:41.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:43.903: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:45.902: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:47.902: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:49.902: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:49.942: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:49.942: INFO: Unexpected error: <*errors.errorString | 0xc000207d70>: { s: "timed out waiting for the condition", } Nov 26 14:49:49.942: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001240870) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/node/init/init.go:32 Nov 26 14:49:49.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:49:49.981 [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00088d860)
	test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:43:38.115 Nov 26 14:43:38.115: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/26/22 14:43:38.118 Nov 26 14:43:38.157: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:40.198: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:42.198: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:44.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:46.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:48.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:50.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:52.198: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:54.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:56.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:58.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:00.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:02.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:04.197: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:06.198: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:08.198: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:08.237: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:08.237: INFO: Unexpected error: <*errors.errorString | 0xc0001c9910>: { s: "timed out waiting for the condition", } Nov 26 14:44:08.237: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00088d860) test/e2e/framework/framework.go:241 +0x96f 
[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 26 14:44:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:44:08.277 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000ded860)
	test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:41:53.161 Nov 26 14:41:53.161: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/26/22 14:41:53.163 Nov 26 14:41:53.202: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:55.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:57.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:59.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:01.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:03.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:05.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:07.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:09.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:11.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:13.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:15.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:17.243: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:19.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:21.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:23.242: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:23.282: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:23.282: INFO: Unexpected error: <*errors.errorString | 0xc0000d1db0>: { s: "timed out waiting for the condition", } Nov 26 14:42:23.282: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000ded860) test/e2e/framework/framework.go:241 +0x96f 
[AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 26 14:42:23.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:42:23.322 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b301e0)
	test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:49:36.993 Nov 26 14:49:36.993: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 14:49:36.995 Nov 26 14:49:37.034: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:39.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:41.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:43.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:45.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:47.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:49.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:51.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:53.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:55.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:57.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:59.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:01.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:03.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:05.074: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:07.075: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:07.114: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:07.114: INFO: Unexpected error: <*errors.errorString | 0xc0001d3930>: { s: "timed out waiting for the condition", } Nov 26 14:50:07.114: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b301e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 26 14:50:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:50:07.153 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009221e0)
	test/e2e/framework/framework.go:241 +0x96f

There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:53:00.444: failed to list events in namespace "statefulset-4403": Get "https://34.83.118.239/api/v1/namespaces/statefulset-4403/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:53:00.484: Couldn't delete ns: "statefulset-4403": Delete "https://34.83.118.239/api/v1/namespaces/statefulset-4403": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/statefulset-4403", Err:(*net.OpError)(0xc002cbce60)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:50:05.679 Nov 26 14:50:05.679: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 14:50:05.681 Nov 26 14:50:05.720: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:07.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:09.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:11.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:13.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:15.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:17.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:19.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:21.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:23.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:25.761: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:27.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:29.760: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:00.365: INFO: Unexpected error: <*fmt.wrapError | 0xc004588400>: { msg: "wait for service account \"default\" in namespace \"statefulset-4403\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001fda10>{ s: "timed out waiting for the condition", }, } Nov 26 14:53:00.365: FAIL: wait for service account "default" in namespace "statefulset-4403": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009221e0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 26 14:53:00.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:53:00.405 STEP: Collecting events from namespace "statefulset-4403". 
11/26/22 14:53:00.405 Nov 26 14:53:00.444: INFO: Unexpected error: failed to list events in namespace "statefulset-4403": <*url.Error | 0xc001a362d0>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/statefulset-4403/events", Err: <*net.OpError | 0xc001edf220>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0025084b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00382a3e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:53:00.444: FAIL: failed to list events in namespace "statefulset-4403": Get "https://34.83.118.239/api/v1/namespaces/statefulset-4403/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000ee65c0, {0xc0019b2dd0, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc003a96d00}, {0xc0019b2dd0, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000ee6650?, {0xc0019b2dd0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009221e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0016222b0?, 0xc003c87fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0016222b0?, 0x0?}, {0xae73300?, 0x5?, 0xc0020c56f8?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-4403" for this suite. 11/26/22 14:53:00.445 Nov 26 14:53:00.484: FAIL: Couldn't delete ns: "statefulset-4403": Delete "https://34.83.118.239/api/v1/namespaces/statefulset-4403": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/statefulset-4403", Err:(*net.OpError)(0xc002cbce60)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009221e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001622200?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001622200?, 0x7fc8c08?}, {0xae73300?, 0xc000b43860?, 0xc0003d6758?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
	test/e2e/auth/service_accounts.go:520 +0x9ab
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:43:26.343 Nov 26 14:43:26.344: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/26/22 14:43:26.345 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:43:26.546 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:43:26.663 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 26 14:43:27.011: INFO: created pod Nov 26 14:43:27.011: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 26 14:43:27.011: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-6006" to be "running and ready" Nov 26 14:43:27.119: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 108.192999ms Nov 26 14:43:27.119: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 26 14:43:29.257: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245938194s Nov 26 14:43:29.257: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:43:31.196: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 4.184807629s Nov 26 14:43:31.196: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 26 14:43:31.196: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient] Nov 26 14:43:31.196: INFO: pod is ready Nov 26 14:44:31.197: INFO: polling logs Nov 26 14:44:31.236: INFO: Error pulling logs: Get "https://34.83.118.239/api/v1/namespaces/svcaccounts-6006/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:45:31.196: INFO: polling logs Nov 26 14:45:31.337: FAIL: Unexpected error: inclusterclient reported an error: saw status=failed I1126 14:43:28.389939 1 main.go:61] started I1126 14:43:58.394492 1 main.go:79] calling /healthz I1126 14:43:58.394706 1 main.go:96] authz_header=w8-DStbfiufIveVl7ba6BetgKMjzpqwCu_JzOESIyBM E1126 14:43:58.395619 1 main.go:82] status=failed E1126 14:43:58.395641 1 main.go:83] error checking /healthz: Get "https://10.0.0.1:443/healthz": dial tcp 10.0.0.1:443: connect: connection refused I1126 14:44:28.394128 1 main.go:79] calling /healthz I1126 14:44:28.394332 1 main.go:96] authz_header=w8-DStbfiufIveVl7ba6BetgKMjzpqwCu_JzOESIyBM E1126 14:44:28.395129 1 main.go:82] status=failed E1126 14:44:28.395148 1 main.go:83] error checking /healthz: Get "https://10.0.0.1:443/healthz": dial tcp 10.0.0.1:443: connect: connection refused I1126 14:44:58.391213 1 main.go:79] calling /healthz I1126 14:44:58.391450 1 main.go:96] authz_header=w8-DStbfiufIveVl7ba6BetgKMjzpqwCu_JzOESIyBM I1126 14:45:28.391389 1 main.go:79] calling /healthz I1126 14:45:28.391692 1 main.go:96] authz_header=w8-DStbfiufIveVl7ba6BetgKMjzpqwCu_JzOESIyBM Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 26 14:45:31.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] 
[sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:45:31.42 STEP: Collecting events from namespace "svcaccounts-6006". 11/26/22 14:45:31.42 STEP: Found 4 events. 11/26/22 14:45:31.462 Nov 26 14:45:31.462: INFO: At 2022-11-26 14:43:27 +0000 UTC - event for inclusterclient: {default-scheduler } Scheduled: Successfully assigned svcaccounts-6006/inclusterclient to bootstrap-e2e-minion-group-90df Nov 26 14:45:31.462: INFO: At 2022-11-26 14:43:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-90df} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 14:45:31.462: INFO: At 2022-11-26 14:43:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-90df} Created: Created container inclusterclient Nov 26 14:45:31.462: INFO: At 2022-11-26 14:43:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-90df} Started: Started container inclusterclient Nov 26 14:45:31.502: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 14:45:31.502: INFO: inclusterclient bootstrap-e2e-minion-group-90df Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:27 +0000 UTC }] Nov 26 14:45:31.502: INFO: Nov 26 14:45:31.593: INFO: Logging node info for node bootstrap-e2e-master Nov 26 14:45:31.636: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 39f76886-2c7b-440b-9ef2-f11a2bfefeb1 3886 0 2022-11-26 14:37:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 14:42:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:20 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.118.239,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e3e070cfc8d4db9e880daaf5c4a65019,SystemUUID:e3e070cf-c8d4-db9e-880d-aaf5c4a65019,BootID:ff2901fa-ae19-4286-a08b-110e9e385f96,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:45:31.636: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 14:45:31.680: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 14:45:31.764: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container kube-scheduler ready: true, restart count 3 Nov 26 14:45:31.764: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container konnectivity-server-container ready: true, restart count 3 Nov 26 14:45:31.764: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container l7-lb-controller ready: true, restart count 5 Nov 26 14:45:31.764: INFO: 
etcd-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container etcd-container ready: true, restart count 1 Nov 26 14:45:31.764: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 26 14:45:31.764: INFO: metadata-proxy-v0.1-b48tm started at 2022-11-26 14:37:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:45:31.764: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:45:31.764: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:45:31.764: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container kube-apiserver ready: true, restart count 2 Nov 26 14:45:31.764: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container kube-controller-manager ready: false, restart count 4 Nov 26 14:45:31.764: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:31.764: INFO: Container etcd-container ready: true, restart count 1 Nov 26 14:45:31.950: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 14:45:31.950: INFO: Logging node info for node bootstrap-e2e-minion-group-5c8w Nov 26 14:45:31.993: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5c8w 9c1c2738-39d8-4fbd-8eb9-cd823476dc17 4624 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5c8w kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-5c8w topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1233":"bootstrap-e2e-minion-group-5c8w","csi-hostpath-multivolume-6811":"bootstrap-e2e-minion-group-5c8w"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:40:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:42:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-5c8w,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 
UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:44:40 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:44:40 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:44:40 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:44:40 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.230.112.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f8110843d1932588ce84ecdf8f74c3c9,SystemUUID:f8110843-d193-2588-ce84-ecdf8f74c3c9,BootID:ebb46ff0-b8ec-405f-80a1-fbaa69879823,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386,DevicePath:,},},Config:nil,},} Nov 26 14:45:31.994: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5c8w Nov 26 14:45:32.040: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5c8w Nov 26 14:45:32.175: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:09 +0000 UTC (0+7 container statuses recorded) Nov 26 14:45:32.175: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container hostpath ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 14:45:32.175: INFO: metadata-proxy-v0.1-xb4cm started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:45:32.175: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-46mpd started at 2022-11-26 14:43:13 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-cswzg started at 2022-11-26 14:39:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-c556l started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-74vq9 started at 2022-11-26 14:43:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.175: INFO: external-provisioner-lt7f5 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container 
statuses recorded) Nov 26 14:45:32.175: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 26 14:45:32.175: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:43:16 +0000 UTC (0+7 container statuses recorded) Nov 26 14:45:32.175: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container hostpath ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 14:45:32.175: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-h5pvn started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.175: INFO: test-hostpath-type-7x592 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 14:45:32.175: INFO: kube-proxy-bootstrap-e2e-minion-group-5c8w started at 2022-11-26 14:37:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container kube-proxy ready: true, restart count 4 Nov 26 14:45:32.175: INFO: pod-secrets-5fdd18ad-0588-44cf-82a3-528f3248be63 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:45:32.175: INFO: netserver-0 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container webserver ready: true, restart count 5 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-9s9ds started at 2022-11-26 14:39:43 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 14:45:32.175: INFO: pod-6be3caae-2380-4995-afed-16e4c49357fb started at 2022-11-26 14:39:54 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:32.175: INFO: pod-subpath-test-preprovisionedpv-tdgq started at 2022-11-26 14:43:30 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Init container init-volume-preprovisionedpv-tdgq ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-subpath-preprovisionedpv-tdgq ready: false, restart count 0 Nov 26 14:45:32.175: INFO: pod-subpath-test-preprovisionedpv-cq4s started at 2022-11-26 14:43:30 +0000 UTC (1+2 container statuses recorded) Nov 26 14:45:32.175: INFO: Init container init-volume-preprovisionedpv-cq4s ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-subpath-preprovisionedpv-cq4s ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-volume-preprovisionedpv-cq4s ready: true, restart count 0 Nov 26 14:45:32.175: INFO: pod-9167b845-e5a4-4f53-8d7b-8d6705e552fb started at 2022-11-26 14:40:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:32.175: INFO: pod-subpath-test-preprovisionedpv-v4q9 started at 2022-11-26 14:43:29 +0000 UTC (1+2 container statuses recorded) Nov 26 14:45:32.175: INFO: Init 
container init-volume-preprovisionedpv-v4q9 ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-subpath-preprovisionedpv-v4q9 ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-volume-preprovisionedpv-v4q9 ready: true, restart count 0 Nov 26 14:45:32.175: INFO: affinity-lb-transition-fvtxg started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container affinity-lb-transition ready: true, restart count 2 Nov 26 14:45:32.175: INFO: pod-dd65e016-58f1-47f3-8003-951feb84af57 started at 2022-11-26 14:45:23 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container write-pod ready: true, restart count 0 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-wks5s started at 2022-11-26 14:40:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-jjgn4 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:45:32.175: INFO: pod-subpath-test-inlinevolume-n4pj started at 2022-11-26 14:43:33 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Init container init-volume-inlinevolume-n4pj ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-subpath-inlinevolume-n4pj ready: false, restart count 0 Nov 26 14:45:32.175: INFO: konnectivity-agent-cnxt9 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 26 14:45:32.175: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-478nt started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.175: INFO: pod-7dbe6380-f5d0-4852-b8bf-7231eca57b67 started at 2022-11-26 14:40:23 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:32.175: INFO: mutability-test-fxk7p started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container netexec ready: true, restart count 0 Nov 26 14:45:32.175: INFO: pod-subpath-test-preprovisionedpv-wqts started at 2022-11-26 14:43:30 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Init container init-volume-preprovisionedpv-wqts ready: true, restart count 0 Nov 26 14:45:32.175: INFO: Container test-container-subpath-preprovisionedpv-wqts ready: false, restart count 0 Nov 26 14:45:32.175: INFO: metrics-server-v0.5.2-867b8754b9-vrm2k started at 2022-11-26 14:38:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:45:32.175: INFO: Container metrics-server ready: false, restart count 4 Nov 26 14:45:32.175: INFO: Container metrics-server-nanny ready: false, restart count 4 Nov 26 14:45:32.175: INFO: failure-3 started at 2022-11-26 14:39:52 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.175: INFO: Container failure-3 ready: true, restart count 2 Nov 26 14:45:32.461: INFO: Latency metrics for node bootstrap-e2e-minion-group-5c8w Nov 26 14:45:32.461: INFO: Logging node info for node bootstrap-e2e-minion-group-90df Nov 26 14:45:32.502: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-90df 
47ba7cef-c7a9-42dc-a972-e2581f5476da 4683 0 2022-11-26 14:37:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-90df kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-90df topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1468":"csi-mock-csi-mock-volumes-1468"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 14:42:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:45:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-90df,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:45:31 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:45:31 +0000 
UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:45:31 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:45:31 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.184.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83ac01160a0c57758b5edb61ebb59ab4,SystemUUID:83ac0116-0a0c-5775-8b5e-db61ebb59ab4,BootID:8fc388ac-8473-4c3d-8b39-12dca64dff04,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:45:32.503: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-90df Nov 26 14:45:32.574: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-90df Nov 26 14:45:32.649: INFO: httpd started at 2022-11-26 14:41:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container httpd ready: true, restart count 5 Nov 26 14:45:32.650: INFO: hostexec-bootstrap-e2e-minion-group-90df-ql9k6 started at 2022-11-26 14:41:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:45:32.650: INFO: execpod-drops5lkl started at 2022-11-26 14:43:22 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:45:32.650: INFO: lb-sourcerange-9gzfn started at 2022-11-26 14:43:29 +0000 UTC (0+1 container 
statuses recorded) Nov 26 14:45:32.650: INFO: Container netexec ready: false, restart count 4 Nov 26 14:45:32.650: INFO: metadata-proxy-v0.1-ghfq8 started at 2022-11-26 14:37:20 +0000 UTC (0+2 container statuses recorded) Nov 26 14:45:32.650: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:45:32.650: INFO: hostexec-bootstrap-e2e-minion-group-90df-zwwz7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:32.650: INFO: pod-82ed96e1-e0ff-4745-aac4-9f46213916e7 started at 2022-11-26 14:45:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container write-pod ready: true, restart count 0 Nov 26 14:45:32.650: INFO: l7-default-backend-8549d69d99-s4b5m started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 14:45:32.650: INFO: konnectivity-agent-8rxr7 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 26 14:45:32.650: INFO: csi-mockplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+4 container statuses recorded) Nov 26 14:45:32.650: INFO: Container busybox ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 14:45:32.650: INFO: Container driver-registrar ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container mock ready: true, restart count 0 Nov 26 14:45:32.650: INFO: affinity-lb-transition-tgknb started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container affinity-lb-transition ready: true, restart count 1 Nov 26 14:45:32.650: INFO: pod-subpath-test-preprovisionedpv-bhqw started at 2022-11-26 14:43:29 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Init container init-volume-preprovisionedpv-bhqw ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container test-container-subpath-preprovisionedpv-bhqw ready: false, restart count 0 Nov 26 14:45:32.650: INFO: kube-proxy-bootstrap-e2e-minion-group-90df started at 2022-11-26 14:37:19 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container kube-proxy ready: false, restart count 4 Nov 26 14:45:32.650: INFO: pod-subpath-test-inlinevolume-7q7t started at 2022-11-26 14:41:11 +0000 UTC (1+2 container statuses recorded) Nov 26 14:45:32.650: INFO: Init container init-volume-inlinevolume-7q7t ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container test-container-subpath-inlinevolume-7q7t ready: true, restart count 5 Nov 26 14:45:32.650: INFO: Container test-container-volume-inlinevolume-7q7t ready: false, restart count 4 Nov 26 14:45:32.650: INFO: netserver-1 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container webserver ready: false, restart count 4 Nov 26 14:45:32.650: INFO: hostexec-bootstrap-e2e-minion-group-90df-42ljh started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:45:32.650: INFO: execpod-accepttbp6q started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container 
agnhost-container ready: true, restart count 0 Nov 26 14:45:32.650: INFO: kube-dns-autoscaler-5f6455f985-g8dtn started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container autoscaler ready: false, restart count 4 Nov 26 14:45:32.650: INFO: coredns-6d97d5ddb-thsmq started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container coredns ready: false, restart count 5 Nov 26 14:45:32.650: INFO: mutability-test-pdxr6 started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container netexec ready: true, restart count 2 Nov 26 14:45:32.650: INFO: pod-secrets-ca2e8eea-812f-46de-9d25-74cfeb71e013 started at 2022-11-26 14:41:16 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:45:32.650: INFO: external-provisioner-7gjgf started at 2022-11-26 14:41:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 26 14:45:32.650: INFO: inclusterclient started at 2022-11-26 14:43:27 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container inclusterclient ready: true, restart count 0 Nov 26 14:45:32.650: INFO: back-off-cap started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container back-off-cap ready: false, restart count 5 Nov 26 14:45:32.650: INFO: failure-4 started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container failure-4 ready: false, restart count 0 Nov 26 14:45:32.650: INFO: pod-subpath-test-preprovisionedpv-n6nd started at 2022-11-26 14:43:29 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Init container init-volume-preprovisionedpv-n6nd ready: true, restart count 0 Nov 26 14:45:32.650: INFO: Container test-container-subpath-preprovisionedpv-n6nd ready: false, restart count 0 Nov 26 14:45:32.650: INFO: hostexec-bootstrap-e2e-minion-group-90df-xp8d8 started at 2022-11-26 14:43:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container agnhost-container ready: false, restart count 2 Nov 26 14:45:32.650: INFO: volume-snapshot-controller-0 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container volume-snapshot-controller ready: false, restart count 5 Nov 26 14:45:32.650: INFO: httpd started at 2022-11-26 14:39:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container httpd ready: true, restart count 1 Nov 26 14:45:32.650: INFO: pod-057c12b8-fcaa-47f7-b71f-aa3400ae7e4d started at 2022-11-26 14:41:44 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:32.650: INFO: hostpath-symlink-prep-provisioning-7611 started at 2022-11-26 14:41:44 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container init-volume-provisioning-7611 ready: false, restart count 0 Nov 26 14:45:32.650: INFO: external-local-update-92js7 started at 2022-11-26 14:41:45 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:32.650: INFO: Container netexec ready: true, restart count 2 Nov 26 14:45:32.902: INFO: Latency metrics for node bootstrap-e2e-minion-group-90df Nov 26 14:45:32.902: INFO: Logging node info for node bootstrap-e2e-minion-group-r2mh Nov 26 14:45:32.945: INFO: 
Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-r2mh 1a723982-de14-44dd-ba83-f2a219df5b69 4287 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-r2mh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-r2mh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2338":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-multivolume-3585":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-provisioning-9596":"bootstrap-e2e-minion-group-r2mh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:40:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:42:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:43:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-r2mh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 
14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.108.57,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b02b78c4c4f55f886d1255b57a8f72a9,SystemUUID:b02b78c4-c4f5-5f88-6d12-55b57a8f72a9,BootID:89f45761-3bf9-44b7-ab35-4ef95f8fa75c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163 kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d,DevicePath:,},},Config:nil,},} Nov 26 14:45:32.945: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-r2mh Nov 26 14:45:32.991: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-r2mh Nov 26 14:45:33.058: INFO: csi-mockplugin-0 started at 2022-11-26 14:40:16 +0000 UTC (0+4 container statuses recorded) Nov 26 14:45:33.058: INFO: Container busybox ready: false, restart count 3 Nov 26 14:45:33.058: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container driver-registrar ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container mock ready: true, restart count 4 Nov 26 14:45:33.058: INFO: kube-proxy-bootstrap-e2e-minion-group-r2mh started at 2022-11-26 14:37:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container kube-proxy ready: false, restart count 5 Nov 26 14:45:33.058: INFO: coredns-6d97d5ddb-wmgqj started at 2022-11-26 14:37:40 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container coredns ready: true, restart count 4 Nov 26 14:45:33.058: INFO: pod-configmaps-083a45f1-1cc7-4319-bec1-83b30373c023 started at 2022-11-26 14:39:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 14:45:33.058: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:30 +0000 UTC (0+7 container statuses recorded) Nov 26 14:45:33.058: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container hostpath ready: true, restart count 4 Nov 26 14:45:33.058: 
INFO: Container liveness-probe ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 14:45:33.058: INFO: metadata-proxy-v0.1-66r9l started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:45:33.058: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:45:33.058: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:45:33.058: INFO: konnectivity-agent-tb7mp started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container konnectivity-agent ready: true, restart count 3 Nov 26 14:45:33.058: INFO: pod-120effda-3138-4ce8-9b4f-08806b37e6a7 started at 2022-11-26 14:43:36 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:33.058: INFO: netserver-2 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container webserver ready: true, restart count 0 Nov 26 14:45:33.058: INFO: pod-9f945ae2-b2e9-4784-8ff8-108d273c77c3 started at 2022-11-26 14:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:45:33.058: INFO: hostexec-bootstrap-e2e-minion-group-r2mh-jx5n7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:45:33.058: INFO: pod-subpath-test-dynamicpv-r9t6 started at 2022-11-26 14:39:37 +0000 UTC (1+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Init container init-volume-dynamicpv-r9t6 ready: false, restart count 0 Nov 26 14:45:33.058: INFO: Container test-container-subpath-dynamicpv-r9t6 ready: false, restart count 0 Nov 26 14:45:33.058: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:10 +0000 UTC (0+7 container statuses recorded) Nov 26 14:45:33.058: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container hostpath ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 14:45:33.058: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 14:45:33.058: INFO: affinity-lb-transition-n2tc5 started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:45:33.058: INFO: Container affinity-lb-transition ready: true, restart count 2 Nov 26 14:45:33.058: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+7 container statuses recorded) Nov 26 14:45:33.058: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container hostpath ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 14:45:33.058: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 14:45:33.328: INFO: Latency metrics for node bootstrap-e2e-minion-group-r2mh 
[DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-6006" for this suite. 11/26/22 14:45:33.328
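The dump above is the per-node diagnostic the framework records on failure: node metadata and conditions, the image list, and the pods the kubelet reports for each node. A rough manual equivalent against the same cluster, assuming the kubeconfig path used by this run and taking bootstrap-e2e-minion-group-90df as the example node, would be:
kubectl --kubeconfig=/workspace/.kube/config describe node bootstrap-e2e-minion-group-90df
kubectl --kubeconfig=/workspace/.kube/config get pods --all-namespaces -o wide --field-selector spec.nodeName=bootstrap-e2e-minion-group-90df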
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:589
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7()
    test/e2e/kubectl/kubectl.go:589 +0x22d
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:42:06.206: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 delete --grace-period=0 --force -f -:
Command stdout:
stderr:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when deleting "STDIN": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7557/pods/httpd": dial tcp 34.83.118.239:443: connect: connection refused
error: exit status 1
In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87
----------
[FAILED] Nov 26 14:42:06.286: failed to list events in namespace "kubectl-7557": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7557/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:42:06.326: Couldn't delete ns: "kubectl-7557": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7557": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7557", Err:(*net.OpError)(0xc005305360)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:41:17.812 Nov 26 14:41:17.812: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 14:41:17.813 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:41:17.96 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:41:18.047 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 14:41:18.138 Nov 26 14:41:18.138: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 create -f -' Nov 26 14:41:18.530: INFO: stderr: "" Nov 26 14:41:18.530: INFO: stdout: "pod/httpd created\n" Nov 26 14:41:18.530: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 14:41:18.530: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7557" to be "running and ready" Nov 26 14:41:18.571: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 41.099388ms Nov 26 14:41:18.571: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:41:20.613: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.082833753s Nov 26 14:41:20.613: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC }] Nov 26 14:41:22.618: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.08796512s Nov 26 14:41:22.618: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC }] Nov 26 14:41:24.639: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.109214983s Nov 26 14:41:24.639: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC }] Nov 26 14:41:26.615: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.084388722s Nov 26 14:41:26.615: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC }] Nov 26 14:41:28.618: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.087760233s Nov 26 14:41:28.618: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:41:18 +0000 UTC }] Nov 26 14:41:30.623: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.092940476s Nov 26 14:41:30.623: INFO: Pod "httpd" satisfied condition "running and ready" Nov 26 14:41:30.623: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] [Slow] running a failing command with --leave-stdin-open test/e2e/kubectl/kubectl.go:585 Nov 26 14:41:30.623: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42' Nov 26 14:42:06.091: INFO: rc: 1 Nov 26 14:42:06.091: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0015078c0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42:\nCommand stdout:\n\nstderr:\nError from server: Get \"https://10.138.0.5:10250/containerLogs/kubectl-7557/failure-4/failure-4\": context deadline exceeded: connection error: desc = \"transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory\"\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 14:42:06.091: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42: Command stdout: stderr: Error from server: Get "https://10.138.0.5:10250/containerLogs/kubectl-7557/failure-4/failure-4": context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 14:42:06.091 Nov 26 14:42:06.091: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 delete --grace-period=0 --force -f -' Nov 26 14:42:06.206: INFO: rc: 1 Nov 26 14:42:06.206: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc00151aad0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.83.118.239/api/v1/namespaces/kubectl-7557/pods/httpd\": dial tcp 34.83.118.239:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 14:42:06.206: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7557 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7557/pods/httpd": dial tcp 34.83.118.239:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc0031b3ce0?, 0x0?}, {0xc00530f920, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc00530f920, 0xc}, {0xc002060580, 0x145}, {0xc000565ec0?, 0x8?, 0x7fae4d61b1d8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc002060580, 0x145}, {0xc00530f920, 0xc}, {0xc001507cb0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 14:42:06.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:42:06.246 STEP: Collecting events from namespace "kubectl-7557". 
11/26/22 14:42:06.246 Nov 26 14:42:06.286: INFO: Unexpected error: failed to list events in namespace "kubectl-7557": <*url.Error | 0xc0032a0150>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/kubectl-7557/events", Err: <*net.OpError | 0xc0032aa000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0031e2ba0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00148d7c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:42:06.286: FAIL: failed to list events in namespace "kubectl-7557": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7557/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003a8e5c0, {0xc00530f920, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0009196c0}, {0xc00530f920, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003a8e650?, {0xc00530f920?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000cdc2d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001470d30?, 0xc003a9efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001bdaf28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001470d30?, 0x29449fc?}, {0xae73300?, 0xc003a9ef80?, 0xc004cf6480?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-7557" for this suite. 11/26/22 14:42:06.286 Nov 26 14:42:06.326: FAIL: Couldn't delete ns: "kubectl-7557": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7557": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7557", Err:(*net.OpError)(0xc005305360)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000cdc2d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001470c20?, 0xc003a9efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001bdaf28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001470c20?, 0x29449fc?}, {0xae73300?, 0xc003a9ef80?, 0xc004cf6480?}) /usr/local/go/src/reflect/value.go:368 +0xbc
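Every step in this block, from the kubectl run invocation through the namespace cleanup and event dump, ultimately fails against https://34.83.118.239 with "connection refused", and the in-test error additionally shows the konnectivity server socket missing. A rough pre-flight check before re-running the focused test, assuming the kubeconfig path from this run and that the konnectivity agents live in kube-system as on a standard GCE e2e cluster, would be:
curl -sk --max-time 5 https://34.83.118.239/healthz
kubectl --kubeconfig=/workspace/.kube/config get --raw /healthz
kubectl --kubeconfig=/workspace/.kube/config -n kube-system get pods | grep konnectivity
Once the API server answers again, the failing step itself can be retried in isolation with the same invocation the test used (hypothetical pod name substituted for failure-4, namespace flag omitted so it runs in the current context), and the result compared with what the test observed above:
kubectl --kubeconfig=/workspace/.kube/config run exit-code-check -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s --leave-stdin-open -- /bin/sh -c 'exit 42'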
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/framework/kubectl/builder.go:87
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000b929a0?, 0x0?}, {0xc003a18010, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc000ee4ec0, 0xc}, {0xc00222fce0, 0x145}, {0xc000a75ec0?, 0x8?, 0x7fe09f925108?}) test/e2e/framework/kubectl/builder.go:165 +0xd6
k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc00222fce0, 0x145}, {0xc000ee4ec0, 0xc}, {0xc001118140, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:50:20.934: failed to list events in namespace "kubectl-7156": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7156/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:50:20.974: Couldn't delete ns: "kubectl-7156": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7156": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7156", Err:(*net.OpError)(0xc000048050)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:47:36.732 Nov 26 14:47:36.733: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 14:47:36.735 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:47:36.969 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:47:37.065 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 14:47:37.156 Nov 26 14:47:37.157: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7156 create -f -' Nov 26 14:47:37.832: INFO: stderr: "" Nov 26 14:47:37.832: INFO: stdout: "pod/httpd created\n" Nov 26 14:47:37.832: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 14:47:37.832: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7156" to be "running and ready" Nov 26 14:47:37.931: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 99.294381ms Nov 26 14:47:37.931: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:40.007: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175441353s Nov 26 14:47:40.007: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:42.018: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185908735s Nov 26 14:47:42.018: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:44.027: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194877821s Nov 26 14:47:44.027: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:46.026: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194365055s Nov 26 14:47:46.026: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:48.030: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197806263s Nov 26 14:47:48.030: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:49.984: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.15220548s Nov 26 14:47:49.984: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:47:52.006: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.173797316s Nov 26 14:47:52.006: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:47:53.994: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.161594652s Nov 26 14:47:53.994: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:47:55.984: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.151917269s Nov 26 14:47:55.984: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:47:58.002: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.170427872s Nov 26 14:47:58.002: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:47:59.982: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.149742269s Nov 26 14:47:59.982: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:48:01.994: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.161829151s Nov 26 14:48:01.994: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:48:04.037: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.205285521s Nov 26 14:48:04.037: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:47:37 +0000 UTC }] Nov 26 14:48:05.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 28.155478718s Nov 26 14:48:05.988: INFO: Pod "httpd" satisfied condition "running and ready" Nov 26 14:48:05.988: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] [Slow] running a failing command without --restart=Never test/e2e/kubectl/kubectl.go:558 Nov 26 14:48:05.988: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7156 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --pod-running-timeout=2m0s failure-2 -- /bin/sh -c cat && exit 42' Nov 26 14:50:20.746: INFO: rc: 1 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 14:50:20.746 Nov 26 14:50:20.747: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7156 delete --grace-period=0 --force -f -' Nov 26 14:50:20.854: INFO: rc: 1 Nov 26 14:50:20.854: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc001118420>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7156 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.83.118.239/api/v1/namespaces/kubectl-7156/pods/httpd\": dial tcp 34.83.118.239:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 14:50:20.855: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7156 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7156/pods/httpd": dial tcp 34.83.118.239:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc00283c000?, 0x0?}, {0xc000ee4ec0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc000ee4ec0, 0xc}, {0xc00222fce0, 0x145}, {0xc000a75ec0?, 0x8?, 0x7fe09f925108?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc00222fce0, 0x145}, {0xc000ee4ec0, 0xc}, {0xc001118140, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 14:50:20.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:50:20.894 STEP: Collecting events from namespace "kubectl-7156". 
11/26/22 14:50:20.895 Nov 26 14:50:20.934: INFO: Unexpected error: failed to list events in namespace "kubectl-7156": <*url.Error | 0xc002eec000>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/kubectl-7156/events", Err: <*net.OpError | 0xc0026954a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001adc780>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000b14000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:50:20.934: FAIL: failed to list events in namespace "kubectl-7156": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7156/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00003e5c0, {0xc000ee4ec0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0026dd520}, {0xc000ee4ec0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00003e650?, {0xc000ee4ec0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f362d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0011da500?, 0xc003c6ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc003282be8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0011da500?, 0x29449fc?}, {0xae73300?, 0xc003c6ff80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-7156" for this suite. 11/26/22 14:50:20.935 Nov 26 14:50:20.974: FAIL: Couldn't delete ns: "kubectl-7156": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7156": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7156", Err:(*net.OpError)(0xc000048050)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f362d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0011da430?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0011da430?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/framework/kubectl/builder.go:87
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000b929a0?, 0x0?}, {0xc003a18010, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc003a18010, 0xc}, {0xc00346e000, 0x145}, {0xc000bbbec0?, 0x8?, 0x7f5b390725b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6
k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc00346e000, 0x145}, {0xc003a18010, 0xc}, {0xc0017963a0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:42:02.030: failed to list events in namespace "kubectl-7416": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7416/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:42:02.069: Couldn't delete ns: "kubectl-7416": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7416": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7416", Err:(*net.OpError)(0xc0037442d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:39:17.464 Nov 26 14:39:17.465: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 14:39:17.473 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:39:17.839 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:39:17.921 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 14:39:18.002 Nov 26 14:39:18.002: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7416 create -f -' Nov 26 14:39:18.515: INFO: stderr: "" Nov 26 14:39:18.515: INFO: stdout: "pod/httpd created\n" Nov 26 14:39:18.515: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 14:39:18.515: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7416" to be "running and ready" Nov 26 14:39:18.564: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.734422ms Nov 26 14:39:18.564: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:20.606: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090830366s Nov 26 14:39:20.606: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:22.636: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120743829s Nov 26 14:39:22.636: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:24.605: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090331838s Nov 26 14:39:24.605: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:26.605: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090211741s Nov 26 14:39:26.605: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:28.605: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090488704s Nov 26 14:39:28.605: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:39:30.606: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.090929788s Nov 26 14:39:30.606: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:32.619: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.104185957s Nov 26 14:39:32.619: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:34.605: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.090339846s Nov 26 14:39:34.605: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:36.609: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.093732632s Nov 26 14:39:36.609: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:38.642: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.126743108s Nov 26 14:39:38.642: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:40.639: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.124616892s Nov 26 14:39:40.639: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:42.608: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.093641837s Nov 26 14:39:42.608: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:44.605: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.090612983s Nov 26 14:39:44.605: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:46.606: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.091404373s Nov 26 14:39:46.606: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:48.606: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.091133529s Nov 26 14:39:48.606: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:50.606: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.091447523s Nov 26 14:39:50.606: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:41 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:39:18 +0000 UTC }] Nov 26 14:39:52.610: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 34.094789846s Nov 26 14:39:52.610: INFO: Pod "httpd" satisfied condition "running and ready" Nov 26 14:39:52.610: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] [Slow] running a failing command without --restart=Never, but with --rm test/e2e/kubectl/kubectl.go:571 Nov 26 14:39:52.610: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7416 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=OnFailure --rm --pod-running-timeout=2m0s failure-3 -- /bin/sh -c cat && exit 42' Nov 26 14:42:01.799: INFO: rc: 1 Nov 26 14:42:01.799: INFO: Waiting for pod failure-3 to disappear Nov 26 14:42:01.839: INFO: Encountered non-retryable error while listing pods: Get "https://34.83.118.239/api/v1/namespaces/kubectl-7416/pods": dial tcp 34.83.118.239:443: connect: connection refused [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 14:42:01.839 Nov 26 14:42:01.839: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7416 delete --grace-period=0 --force -f -' Nov 26 14:42:01.950: INFO: rc: 1 Nov 26 14:42:01.950: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc000e2e010>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7416 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.83.118.239/api/v1/namespaces/kubectl-7416/pods/httpd\": dial tcp 34.83.118.239:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 14:42:01.950: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7416 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
error: error when deleting "STDIN": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7416/pods/httpd": dial tcp 34.83.118.239:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000b929a0?, 0x0?}, {0xc003a18010, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc003a18010, 0xc}, {0xc00346e000, 0x145}, {0xc000bbbec0?, 0x8?, 0x7f5b390725b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc00346e000, 0x145}, {0xc003a18010, 0xc}, {0xc0017963a0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 14:42:01.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:42:01.99 STEP: Collecting events from namespace "kubectl-7416". 11/26/22 14:42:01.99 Nov 26 14:42:02.029: INFO: Unexpected error: failed to list events in namespace "kubectl-7416": <*url.Error | 0xc00393c300>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/kubectl-7416/events", Err: <*net.OpError | 0xc001744a50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003536780>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003a1e1c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:42:02.030: FAIL: failed to list events in namespace "kubectl-7416": Get "https://34.83.118.239/api/v1/namespaces/kubectl-7416/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00065e5c0, {0xc003a18010, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00470c000}, {0xc003a18010, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00065e650?, {0xc003a18010?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f182d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0013ce540?, 0xc003b01fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc003a7ea48?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013ce540?, 0x29449fc?}, {0xae73300?, 0xc003b01f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-7416" for this suite. 
11/26/22 14:42:02.03 Nov 26 14:42:02.069: FAIL: Couldn't delete ns: "kubectl-7416": Delete "https://34.83.118.239/api/v1/namespaces/kubectl-7416": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/kubectl-7416", Err:(*net.OpError)(0xc0037442d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f182d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0013ce430?, 0xc00157efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0013ce430?, 0x0?}, {0xae73300?, 0x5?, 0xc0034e60a8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e93d10) test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:53:07.065: failed to list events in namespace "addon-update-test-775": Get "https://34.83.118.239/api/v1/namespaces/addon-update-test-775/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:53:07.105: Couldn't delete ns: "addon-update-test-775": Delete "https://34.83.118.239/api/v1/namespaces/addon-update-test-775": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/addon-update-test-775", Err:(*net.OpError)(0xc001e76870)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:50:20.998 Nov 26 14:50:20.998: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/26/22 14:50:20.999 Nov 26 14:50:21.039: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:23.079: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:25.080: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:27.079: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:29.079: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:06.986: INFO: Unexpected error: <*fmt.wrapError | 0xc000aee3e0>: { msg: "wait for service account \"default\" in namespace \"addon-update-test-775\": timed out waiting for the condition", err: <*errors.errorString | 0xc0000d1db0>{ s: "timed out waiting for the condition", }, } Nov 26 14:53:06.986: FAIL: wait for service account "default" in namespace "addon-update-test-775": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e93d10) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/node/init/init.go:32 Nov 26 14:53:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:237 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:53:07.026 STEP: Collecting events from namespace "addon-update-test-775". 
11/26/22 14:53:07.026 Nov 26 14:53:07.065: INFO: Unexpected error: failed to list events in namespace "addon-update-test-775": <*url.Error | 0xc003486600>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/addon-update-test-775/events", Err: <*net.OpError | 0xc0030154a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0034865d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000208f60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:53:07.065: FAIL: failed to list events in namespace "addon-update-test-775": Get "https://34.83.118.239/api/v1/namespaces/addon-update-test-775/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00003a5c0, {0xc000e40cd8, 0x15}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc004085860}, {0xc000e40cd8, 0x15}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00003a650?, {0xc000e40cd8?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000e93d10) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001142690?, 0xc003ce3fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0019cc3c8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001142690?, 0x29449fc?}, {0xae73300?, 0xc003ce3f80?, 0xc003ce3f70?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update tear down framework | framework.go:193 STEP: Destroying namespace "addon-update-test-775" for this suite. 11/26/22 14:53:07.066 Nov 26 14:53:07.105: FAIL: Couldn't delete ns: "addon-update-test-775": Delete "https://34.83.118.239/api/v1/namespaces/addon-update-test-775": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/addon-update-test-775", Err:(*net.OpError)(0xc001e76870)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e93d10) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001142570?, 0xc001baafb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001142570?, 0x0?}, {0xae73300?, 0x5?, 0xc001b7c348?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1513
k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:41:52.655: failed to list events in namespace "esipp-239": Get "https://34.83.118.239/api/v1/namespaces/esipp-239/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:41:52.695: Couldn't delete ns: "esipp-239": Delete "https://34.83.118.239/api/v1/namespaces/esipp-239": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-239", Err:(*net.OpError)(0xc003cbb1d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:40:12.192 Nov 26 14:40:12.192: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 14:40:12.193 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:40:12.335 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:40:12.513 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-239/external-local-update with type=LoadBalancer 11/26/22 14:40:12.847 STEP: setting ExternalTrafficPolicy=Local 11/26/22 14:40:12.847 STEP: waiting for loadbalancer for service esipp-239/external-local-update 11/26/22 14:40:12.982 Nov 26 14:40:12.983: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 14:41:45.07 Nov 26 14:41:45.119: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 14:41:45.174: INFO: Found all 1 pods Nov 26 14:41:45.174: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-92js7] Nov 26 14:41:45.174: INFO: Waiting up to 2m0s for pod "external-local-update-92js7" in namespace "esipp-239" to be "running and ready" Nov 26 14:41:45.216: INFO: Pod "external-local-update-92js7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.499662ms Nov 26 14:41:45.216: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-92js7' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:41:47.270: INFO: Pod "external-local-update-92js7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096750388s Nov 26 14:41:47.270: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-92js7' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:41:49.266: INFO: Pod "external-local-update-92js7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092238305s Nov 26 14:41:49.266: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-92js7' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:41:51.259: INFO: Pod "external-local-update-92js7": Phase="Running", Reason="", readiness=true. Elapsed: 6.08494821s Nov 26 14:41:51.259: INFO: Pod "external-local-update-92js7" satisfied condition "running and ready" Nov 26 14:41:51.259: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-update-92js7] STEP: waiting for loadbalancer for service esipp-239/external-local-update 11/26/22 14:41:51.259 Nov 26 14:41:51.259: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 14:41:51.299 Nov 26 14:41:52.427: INFO: Unexpected error: <*errors.errorString | 0xc000f6a6a0>: { s: "get endpoints for service esipp-239/external-local-update failed (Get \"https://34.83.118.239/api/v1/namespaces/esipp-239/endpoints/external-local-update\": dial tcp 34.83.118.239:443: connect: connection refused)", } Nov 26 14:41:52.427: FAIL: get endpoints for service esipp-239/external-local-update failed (Get "https://34.83.118.239/api/v1/namespaces/esipp-239/endpoints/external-local-update": dial tcp 34.83.118.239:443: connect: connection refused) Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf Nov 26 14:41:52.466: INFO: Unexpected error: <*errors.errorString | 0xc0017cf710>: { s: "failed to get Service \"external-local-update\": Get \"https://34.83.118.239/api/v1/namespaces/esipp-239/services/external-local-update\": dial tcp 34.83.118.239:443: connect: connection refused", } Nov 26 14:41:52.466: FAIL: failed to get Service "external-local-update": Get "https://34.83.118.239/api/v1/namespaces/esipp-239/services/external-local-update": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7.1() test/e2e/network/loadbalancer.go:1495 +0xae panic({0x70eb7e0, 0xc000d4ce70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00517a000, 0xd3}, {0xc0051f76b8?, 0xc00517a000?, 0xc0051f76e0?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc000f6a6a0}, {0x0?, 0x7607921?, 0x15?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1513 +0x2bf [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 14:41:52.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 14:41:52.506: INFO: Output of kubectl describe svc: Nov 26 14:41:52.506: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=esipp-239 describe svc --namespace=esipp-239' Nov 26 14:41:52.615: INFO: rc: 1 Nov 26 14:41:52.615: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:41:52.615 STEP: Collecting events from namespace "esipp-239". 
11/26/22 14:41:52.615 Nov 26 14:41:52.655: INFO: Unexpected error: failed to list events in namespace "esipp-239": <*url.Error | 0xc000710300>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/esipp-239/events", Err: <*net.OpError | 0xc00515a280>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039ea360>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0037f8040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:41:52.655: FAIL: failed to list events in namespace "esipp-239": Get "https://34.83.118.239/api/v1/namespaces/esipp-239/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0051f65c0, {0xc0039477a0, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc005084680}, {0xc0039477a0, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0051f6650?, {0xc0039477a0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00128c000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000f981f0?, 0xc0000c8f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0000c8f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f981f0?, 0x2622c40?}, {0xae73300?, 0xc0000c8f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-239" for this suite. 11/26/22 14:41:52.655 Nov 26 14:41:52.695: FAIL: Couldn't delete ns: "esipp-239": Delete "https://34.83.118.239/api/v1/namespaces/esipp-239": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-239", Err:(*net.OpError)(0xc003cbb1d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00128c000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000f98140?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f98140?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/network/loadbalancer.go:1363
k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1363 +0x130
There were additional failures detected after the initial failure:
[FAILED] Nov 26 15:01:38.565: failed to list events in namespace "esipp-5881": Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 15:01:38.605: Couldn't delete ns: "esipp-5881": Delete "https://34.83.118.239/api/v1/namespaces/esipp-5881": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-5881", Err:(*net.OpError)(0xc003fd95e0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:44:07.876 Nov 26 14:44:07.877: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 14:44:07.878 Nov 26 14:44:07.917: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:09.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:11.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:13.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:15.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:17.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:19.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:21.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:23.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:25.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:27.956: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:29.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:31.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:33.956: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:35.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:46:37.429 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:46:37.512 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-5881/external-local-nodes with type=LoadBalancer 11/26/22 14:46:37.932 STEP: setting ExternalTrafficPolicy=Local 11/26/22 14:46:37.932 STEP: waiting for loadbalancer 
for service esipp-5881/external-local-nodes 11/26/22 14:46:38.21 Nov 26 14:46:38.210: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer Nov 26 14:48:34.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:36.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:38.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:40.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:42.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:44.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:46.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:48.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:50.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:52.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:54.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:56.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:58.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:00.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:02.335: INFO: Retrying .... 
Nov 26 14:48:34.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused
[... the identical "Retrying ...." / "connection refused" pair was logged every ~2s from 14:48:36 through 14:51:36 (with one ~8s gap around 14:50:30) while the test waited for the load balancer to be provisioned ...]
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m29.996s)
    test/e2e/network/loadbalancer.go:1346
    In [It] (Node Runtime: 5m0s)
      test/e2e/network/loadbalancer.go:1346
      At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 4m59.663s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 1197 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?)
    test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10)
    test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?)
    test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1353
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
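The goroutine stack in the progress report above shows exactly where the spec is blocked: CreateOnlyLocalLoadBalancerService -> CreateLoadBalancerService -> WaitForLoadBalancer -> waitForCondition -> wait.PollImmediate. A simplified sketch of that kind of wait loop, assuming a 2s poll interval to match the retry cadence in the log (this is not the framework's actual waitForCondition helper):

package lbwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancer polls the Service until an ingress IP or hostname shows
// up in .status.loadBalancer, or the timeout (15m in the log above) expires.
// Transient GET errors such as "connection refused" are logged and retried,
// which is why the log keeps printing "Retrying ...." instead of failing fast.
func waitForLoadBalancer(c kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil // keep polling through apiserver outages
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // load balancer not provisioned yet
		}
		svc = s
		return true, nil
	})
	return svc, err
}

Returning false, nil on GET errors is what makes the wait resilient to a briefly unavailable apiserver; the cost is that the full 15m budget is burned when the apiserver never comes back, which is what this run shows.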
Nov 26 14:51:38.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused
[... the identical "Retrying ...." / "connection refused" pair was logged every ~2s from 14:51:40 through 14:53:04, and Ginkgo emitted further Progress Reports for Process #7 at Spec Runtime 7m49.998s, 8m10.001s, 8m30.003s and 8m50.005s, each still at the "waiting for loadbalancer for service esipp-5881/external-local-nodes" step and each with the same goroutine 1197 stack shown above ...]
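Every one of these retries fails in the HTTP transport ("dial tcp 34.83.118.239:443: connect: connection refused"): nothing is accepting connections on the apiserver address, so the request never reaches the API at all. When triaging logs like this it can help to separate transport failures from real API responses; a small, purely hypothetical helper for that distinction:

package diag

import (
	"errors"
	"net"
	"net/url"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// classify distinguishes "the control plane is unreachable" (what this log
// shows: a *url.Error / net.Error wrapping a refused connection) from errors
// that the apiserver actually returned, such as NotFound for a missing Service.
func classify(err error) string {
	var urlErr *url.Error
	var netErr net.Error
	switch {
	case errors.As(err, &urlErr), errors.As(err, &netErr):
		return "transport failure: apiserver unreachable (e.g. connection refused)"
	case apierrors.IsNotFound(err):
		return "API response: Service not found"
	default:
		return "other API error"
	}
}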
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:06.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:08.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:10.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:12.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:14.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:16.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m10.007s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m40.012s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 6m39.674s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:18.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:20.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:22.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:24.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:26.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:28.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:30.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:32.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:34.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:36.335: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m30.009s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m0.013s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 6m59.676s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:38.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:40.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:42.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:44.335: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:46.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:48.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:50.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:52.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:54.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:56.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m50.011s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m20.015s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 7m19.678s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:58.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 10m10.014s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m40.018s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 7m39.68s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) 
  test/e2e/framework/service/jig.go:222
------------------------------
Nov 26 14:57:36.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:26.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:28.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:30.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:32.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:34.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:36.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 16m30.053s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m0.057s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 13m59.72s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:00:38.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:40.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:42.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:44.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:46.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:48.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:50.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:52.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:54.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:56.335: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 16m50.055s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m20.059s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 14m19.722s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:00:58.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:00.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:02.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:04.335: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:06.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:08.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:10.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:12.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:14.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:16.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 17m10.057s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m40.061s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 14m39.724s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:01:18.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:20.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:22.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:24.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:26.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:28.334: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:30.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:32.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:34.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:36.335: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 17m30.06s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 15m0.064s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-5881/external-local-nodes (Step Runtime: 14m59.727s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1197 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a43818, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b8000}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b8000}, 0xc002a21ce0?, 0xc001833a60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0011918c0?, 0x7fa7740?, 0xc00028eb80?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0024b4550, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0024b4550, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0024b4550, 0x6aba880?, 0xc001833d10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0024b4550, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000355680}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:01:38.335: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:38.374: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/services/external-local-nodes": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:38.374: INFO: Unexpected error: <*fmt.wrapError | 0xc0010720c0>: { msg: "timed out waiting for service \"external-local-nodes\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc000287c60>{ s: "timed out waiting for the condition", }, } Nov 26 15:01:38.374: FAIL: timed out waiting for service "external-local-nodes" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1363 +0x130 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 15:01:38.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 15:01:38.414: INFO: Output of kubectl describe svc: Nov 26 15:01:38.414: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=esipp-5881 describe svc --namespace=esipp-5881' Nov 26 15:01:38.525: INFO: rc: 1 Nov 26 15:01:38.525: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 15:01:38.525 STEP: Collecting events from namespace "esipp-5881". 11/26/22 15:01:38.525 Nov 26 15:01:38.565: INFO: Unexpected error: failed to list events in namespace "esipp-5881": <*url.Error | 0xc003fc4000>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/esipp-5881/events", Err: <*net.OpError | 0xc001678140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000aa4360>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0009da080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 15:01:38.565: FAIL: failed to list events in namespace "esipp-5881": Get "https://34.83.118.239/api/v1/namespaces/esipp-5881/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0017285c0, {0xc00044e4f0, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002b6a000}, {0xc00044e4f0, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001728650?, {0xc00044e4f0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001106000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0012977d0?, 0xc002122f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0012977d0?, 0x7fadfa0?}, {0xae73300?, 0xc002122f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace 
"esipp-5881" for this suite. 11/26/22 15:01:38.565 Nov 26 15:01:38.605: FAIL: Couldn't delete ns: "esipp-5881": Delete "https://34.83.118.239/api/v1/namespaces/esipp-5881": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-5881", Err:(*net.OpError)(0xc003fd95e0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001106000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0012976f0?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0012976f0?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b4a000)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.2()
    test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:48:36.916 Nov 26 14:48:36.916: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 14:48:36.918 Nov 26 14:48:36.957: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:38.998: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:40.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:42.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:44.998: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:46.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:48.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:50.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:52.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:54.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:56.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:58.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:00.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:02.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:04.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:06.997: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:07.036: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:07.037: INFO: Unexpected error: <*errors.errorString | 0xc000293c80>: { s: "timed out waiting for the condition", } Nov 26 14:49:07.037: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b4a000) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 14:49:07.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:49:07.076 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
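The [PANICKED] AfterEach reported in the junit summary above (nil pointer dereference at test/e2e/network/loadbalancer.go:1262) is consistent with cleanup code dereferencing state that BeforeEach never initialized, since namespace creation already failed. A hedged, illustrative guard for that failure mode; specState and afterEach are hypothetical names, not the actual code in loadbalancer.go:

package sketch

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// specState stands in for whatever per-spec state the real test closes over.
type specState struct {
    cs kubernetes.Interface // set during setup; nil if BeforeEach never completed
    ns string
}

func (s *specState) afterEach() {
    if s.cs == nil || s.ns == "" {
        // Without a guard like this, cleanup that dereferences s.cs panics with
        // "invalid memory address or nil pointer dereference".
        fmt.Println("skipping cleanup: framework was never initialized")
        return
    }
    // real cleanup (describe/delete the service, etc.) would use s.cs here
}

Skipping cleanup when setup never completed keeps the original BeforeEach timeout as the reported failure instead of a secondary panic.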
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000285960, {0x75c6f7c, 0x9}, 0xc001cc5aa0)
    test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000285960, 0x7fa5804cc0b0?)
    test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000285960, 0x3c?)
    test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f10000, {0x0, 0x0, 0x0?})
    test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.4()
    test/e2e/network/loadbalancer.go:1332 +0x145
There were additional failures detected after the initial failure:
[FAILED] Nov 26 14:48:35.311: failed to list events in namespace "esipp-7642": Get "https://34.83.118.239/api/v1/namespaces/esipp-7642/events": dial tcp 34.83.118.239:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 14:48:35.351: Couldn't delete ns: "esipp-7642": Delete "https://34.83.118.239/api/v1/namespaces/esipp-7642": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-7642", Err:(*net.OpError)(0xc00261b090)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:46:51.793 Nov 26 14:46:51.793: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 14:46:51.795 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:46:52.128 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:46:52.248 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-7642/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/26/22 14:46:52.528 STEP: creating a pod to be part of the service external-local-nodeport 11/26/22 14:46:52.753 Nov 26 14:46:52.807: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 14:46:52.928: INFO: Found all 1 pods Nov 26 14:46:52.928: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-zrz5r] Nov 26 14:46:52.928: INFO: Waiting up to 2m0s for pod "external-local-nodeport-zrz5r" in namespace "esipp-7642" to be "running and ready" Nov 26 14:46:53.000: INFO: Pod "external-local-nodeport-zrz5r": Phase="Pending", Reason="", readiness=false. Elapsed: 71.524321ms Nov 26 14:46:53.000: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-zrz5r' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:46:55.072: INFO: Pod "external-local-nodeport-zrz5r": Phase="Running", Reason="", readiness=false. Elapsed: 2.144042838s Nov 26 14:46:55.072: INFO: Error evaluating pod condition running and ready: pod 'external-local-nodeport-zrz5r' on 'bootstrap-e2e-minion-group-5c8w' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:46:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:46:52 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:46:52 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:46:52 +0000 UTC }] Nov 26 14:46:57.070: INFO: Pod "external-local-nodeport-zrz5r": Phase="Running", Reason="", readiness=true. Elapsed: 4.141820734s Nov 26 14:46:57.070: INFO: Pod "external-local-nodeport-zrz5r" satisfied condition "running and ready" Nov 26 14:46:57.070: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodeport-zrz5r] STEP: Performing setup for networking test in namespace esipp-7642 11/26/22 14:46:58.242 STEP: creating a selector 11/26/22 14:46:58.242 STEP: Creating the service pods in kubernetes 11/26/22 14:46:58.242 Nov 26 14:46:58.242: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 14:46:58.639: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-7642" to be "running and ready" Nov 26 14:46:58.813: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 173.968241ms Nov 26 14:46:58.813: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 14:47:00.875: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.236134003s Nov 26 14:47:00.875: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 14:47:02.861: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222017365s Nov 26 14:47:02.861: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 14:47:04.896: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.257308961s Nov 26 14:47:04.896: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:06.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.279029228s Nov 26 14:47:06.918: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:08.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.25084969s Nov 26 14:47:08.889: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:10.884: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.245280462s Nov 26 14:47:10.884: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:12.887: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.248082587s Nov 26 14:47:12.887: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:14.896: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.257829565s Nov 26 14:47:14.896: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:16.927: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.288709498s Nov 26 14:47:16.927: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:18.892: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.253188301s Nov 26 14:47:18.892: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:20.871: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.232860212s Nov 26 14:47:20.872: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:22.892: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.253314257s Nov 26 14:47:22.892: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:24.891: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.252186622s Nov 26 14:47:24.891: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:26.948: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.308935724s Nov 26 14:47:26.948: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:28.926: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.287254012s Nov 26 14:47:28.926: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:30.884: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.244898245s Nov 26 14:47:30.884: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:32.974: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.334913775s Nov 26 14:47:32.974: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:34.876: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.236898636s Nov 26 14:47:34.876: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:36.941: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.302818067s Nov 26 14:47:36.941: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:38.943: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.304601874s Nov 26 14:47:38.943: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:40.882: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.243004591s Nov 26 14:47:40.882: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:42.960: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.321613213s Nov 26 14:47:42.960: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:44.891: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.251987229s Nov 26 14:47:44.891: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:46.966: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.326914533s Nov 26 14:47:46.966: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:48.878: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.239585943s Nov 26 14:47:48.878: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:50.865: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.226833188s Nov 26 14:47:50.865: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:52.902: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.263216267s Nov 26 14:47:52.902: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:54.881: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.242554709s Nov 26 14:47:54.881: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:56.920: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.281360131s Nov 26 14:47:56.920: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:47:58.863: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.22483105s Nov 26 14:47:58.863: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:00.863: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.224832444s Nov 26 14:48:00.863: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:02.901: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.262336062s Nov 26 14:48:02.901: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:04.885: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.246299647s Nov 26 14:48:04.885: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:06.862: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.223580029s Nov 26 14:48:06.862: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:08.861: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.222430587s Nov 26 14:48:08.861: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:10.872: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.232988853s Nov 26 14:48:10.872: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:13.049: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.41068954s Nov 26 14:48:13.049: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 14:48:14.889: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m16.250620249s Nov 26 14:48:14.889: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 14:48:14.889: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 14:48:14.946: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-7642" to be "running and ready" Nov 26 14:48:15.003: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 57.717298ms Nov 26 14:48:15.003: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:17.062: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.116586598s Nov 26 14:48:17.062: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:19.177: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.231411283s Nov 26 14:48:19.177: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:21.088: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.141936199s Nov 26 14:48:21.088: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:23.081: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.135497755s Nov 26 14:48:23.081: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:25.062: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.116765032s Nov 26 14:48:25.062: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:27.091: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.145379877s Nov 26 14:48:27.091: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:29.109: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.163766293s Nov 26 14:48:29.109: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:31.056: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.110054926s Nov 26 14:48:31.056: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:33.060: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.114170529s Nov 26 14:48:33.060: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 14:48:35.043: INFO: Encountered non-retryable error while getting pod esipp-7642/netserver-1: Get "https://34.83.118.239/api/v1/namespaces/esipp-7642/pods/netserver-1": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:35.043: INFO: Unexpected error: <*fmt.wrapError | 0xc0003fd920>: { msg: "error while waiting for pod esipp-7642/netserver-1 to be running and ready: Get \"https://34.83.118.239/api/v1/namespaces/esipp-7642/pods/netserver-1\": dial tcp 34.83.118.239:443: connect: connection refused", err: <*url.Error | 0xc001c79cb0>{ Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/esipp-7642/pods/netserver-1", Err: <*net.OpError | 0xc00261ad70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001d73f80>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0003fd8e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 26 14:48:35.043: FAIL: error while waiting for pod esipp-7642/netserver-1 to be running and ready: Get "https://34.83.118.239/api/v1/namespaces/esipp-7642/pods/netserver-1": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000285960, {0x75c6f7c, 0x9}, 0xc001cc5aa0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000285960, 0x7fa5804cc0b0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000285960, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f10000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 Nov 26 14:48:35.083: INFO: Unexpected error: <*url.Error | 0xc0027ec0c0>: { Op: "Delete", URL: "https://34.83.118.239/api/v1/namespaces/esipp-7642/services/external-local-nodeport", Err: <*net.OpError | 0xc00261ae60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0027ec090>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0003fdae0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:48:35.083: FAIL: Delete "https://34.83.118.239/api/v1/namespaces/esipp-7642/services/external-local-nodeport": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc000c755e0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000fc8000, 0xce}, {0xc0009fb7c0?, 0xc000fc8000?, 0xc0009fb7e8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc0003fd920}, {0x0?, 0xc000ee6670?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000285960, {0x75c6f7c, 0x9}, 0xc001cc5aa0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000285960, 0x7fa5804cc0b0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000285960, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f10000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 14:48:35.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 14:48:35.123: INFO: Output of kubectl describe svc: Nov 26 14:48:35.123: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=esipp-7642 describe svc --namespace=esipp-7642' Nov 26 14:48:35.271: INFO: rc: 1 Nov 26 14:48:35.271: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:48:35.271 STEP: Collecting events from namespace "esipp-7642". 11/26/22 14:48:35.271 Nov 26 14:48:35.311: INFO: Unexpected error: failed to list events in namespace "esipp-7642": <*url.Error | 0xc002670f00>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/esipp-7642/events", Err: <*net.OpError | 0xc0028b3c20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002936690>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004146e20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:48:35.311: FAIL: failed to list events in namespace "esipp-7642": Get "https://34.83.118.239/api/v1/namespaces/esipp-7642/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0009f65c0, {0xc000ee6670, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002463040}, {0xc000ee6670, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0009f6650?, {0xc000ee6670?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f10000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00148e1b0?, 0xc000d62f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00148e1b0?, 0x7fadfa0?}, {0xae73300?, 0xc000d62f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying 
namespace "esipp-7642" for this suite. 11/26/22 14:48:35.311 Nov 26 14:48:35.351: FAIL: Couldn't delete ns: "esipp-7642": Delete "https://34.83.118.239/api/v1/namespaces/esipp-7642": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/esipp-7642", Err:(*net.OpError)(0xc00261b090)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f10000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00148e100?, 0x6563786520656e69?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x6961776120656c69?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00148e100?, 0x363220766f4e0a22?}, {0xae73300?, 0x49203a3436382e34?, 0x6b6f50203a4f464e?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0007d0000)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.2()
    test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
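The BeforeEach failure at framework.go:241 in this summary is the framework's namespace and service-account setup; the detailed log below shows the namespace POST being retried every ~2s against the refused apiserver before the setup times out. A minimal sketch of that create-with-retry pattern, assuming client-go; createTestNamespace and the 30s budget are illustrative (the ~30s window matches what the logs show, but the real framework code may differ):

package sketch

import (
    "context"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

func createTestNamespace(cs kubernetes.Interface, baseName string) (*v1.Namespace, error) {
    var created *v1.Namespace
    err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
        ns, err := cs.CoreV1().Namespaces().Create(context.TODO(),
            &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"}},
            metav1.CreateOptions{})
        if err != nil {
            // "connection refused" is retried; the poll eventually fails with
            // "timed out waiting for the condition".
            return false, nil
        }
        created = ns
        return true, nil
    })
    return created, err
}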
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:53:40.088 Nov 26 14:53:40.088: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 14:53:40.089 Nov 26 14:53:40.129: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:42.168: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:44.169: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:46.168: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:48.169: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:50.169: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:52.169: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:54.168: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:56.168: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:58.168: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:56:03.395: INFO: Unexpected error: <*fmt.wrapError | 0xc0034d6000>: { msg: "wait for service account \"default\" in namespace \"esipp-234\": timed out waiting for the condition", err: <*errors.errorString | 0xc000241a00>{ s: "timed out waiting for the condition", }, } Nov 26 14:56:03.395: FAIL: wait for service account "default" in namespace "esipp-234": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0007d0000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 14:56:03.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:56:03.479 STEP: Collecting events from namespace "esipp-234". 11/26/22 14:56:03.479 STEP: Found 0 events. 
11/26/22 14:56:03.52 Nov 26 14:56:03.568: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 14:56:03.568: INFO: Nov 26 14:56:03.612: INFO: Logging node info for node bootstrap-e2e-master Nov 26 14:56:03.658: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 39f76886-2c7b-440b-9ef2-f11a2bfefeb1 7112 0 2022-11-26 14:37:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 14:54:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:20 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.118.239,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e3e070cfc8d4db9e880daaf5c4a65019,SystemUUID:e3e070cf-c8d4-db9e-880d-aaf5c4a65019,BootID:ff2901fa-ae19-4286-a08b-110e9e385f96,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:56:03.658: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 14:56:03.918: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 14:56:04.548: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 26 14:56:04.548: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container etcd-container ready: true, restart count 1 Nov 26 14:56:04.548: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container etcd-container ready: true, restart count 2 Nov 26 14:56:04.548: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container kube-addon-manager ready: true, restart count 2 Nov 26 14:56:04.548: INFO: metadata-proxy-v0.1-b48tm started at 2022-11-26 14:37:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:56:04.548: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:56:04.548: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:56:04.548: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container kube-apiserver ready: true, restart count 4 Nov 26 14:56:04.548: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container konnectivity-server-container ready: true, restart count 3 Nov 26 14:56:04.548: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container l7-lb-controller ready: true, restart count 7 Nov 26 14:56:04.548: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:04.548: INFO: Container kube-scheduler ready: true, restart count 5 Nov 26 14:56:05.296: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 14:56:05.296: INFO: Logging node info for node bootstrap-e2e-minion-group-5c8w Nov 26 14:56:05.343: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5c8w 9c1c2738-39d8-4fbd-8eb9-cd823476dc17 7107 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5c8w kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-5c8w topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1233":"bootstrap-e2e-minion-group-5c8w","csi-hostpath-provisioning-9442":"bootstrap-e2e-minion-group-5c8w"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:47:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 14:54:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 14:54:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-5c8w,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:54:07 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:03 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:03 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:03 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:54:03 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.230.112.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f8110843d1932588ce84ecdf8f74c3c9,SystemUUID:f8110843-d193-2588-ce84-ecdf8f74c3c9,BootID:ebb46ff0-b8ec-405f-80a1-fbaa69879823,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386,DevicePath:,},},Config:nil,},} Nov 26 14:56:05.344: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5c8w Nov 26 14:56:05.410: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5c8w Nov 26 14:56:05.530: INFO: netserver-0 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container webserver ready: true, restart count 7 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-9s9ds started at 2022-11-26 14:39:43 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 14:56:05.530: INFO: pod-6be3caae-2380-4995-afed-16e4c49357fb started at 2022-11-26 14:39:54 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:56:05.530: INFO: pod-subpath-test-preprovisionedpv-tdgq started at 2022-11-26 14:43:30 +0000 UTC (1+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-preprovisionedpv-tdgq ready: true, restart count 0 Nov 26 14:56:05.530: INFO: Container test-container-subpath-preprovisionedpv-tdgq ready: false, restart count 0 Nov 26 14:56:05.530: INFO: pod-subpath-test-preprovisionedpv-cq4s started at 2022-11-26 14:43:30 +0000 UTC (1+2 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-preprovisionedpv-cq4s ready: true, restart count 2 Nov 26 14:56:05.530: INFO: Container test-container-subpath-preprovisionedpv-cq4s ready: true, restart count 4 Nov 26 14:56:05.530: INFO: Container test-container-volume-preprovisionedpv-cq4s ready: true, restart count 4 Nov 26 14:56:05.530: INFO: nfs-server started at 2022-11-26 14:47:59 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: 
Container nfs-server ready: true, restart count 4 Nov 26 14:56:05.530: INFO: pod-subpath-test-preprovisionedpv-v4q9 started at 2022-11-26 14:43:29 +0000 UTC (1+2 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-preprovisionedpv-v4q9 ready: true, restart count 1 Nov 26 14:56:05.530: INFO: Container test-container-subpath-preprovisionedpv-v4q9 ready: true, restart count 1 Nov 26 14:56:05.530: INFO: Container test-container-volume-preprovisionedpv-v4q9 ready: true, restart count 1 Nov 26 14:56:05.530: INFO: test-hostpath-type-q6kwb started at 2022-11-26 14:48:00 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 14:56:05.530: INFO: pod-secrets-c0028de5-c492-43de-938c-42980069e4c7 started at 2022-11-26 14:48:19 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:56:05.530: INFO: httpd started at 2022-11-26 14:47:37 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container httpd ready: true, restart count 3 Nov 26 14:56:05.530: INFO: pod-configmaps-b9afde7b-7c4e-490c-8c68-d6d469b0b445 started at 2022-11-26 14:47:45 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-wks5s started at 2022-11-26 14:40:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-jjgn4 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 14:56:05.530: INFO: pod-subpath-test-inlinevolume-n4pj started at 2022-11-26 14:43:33 +0000 UTC (1+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-inlinevolume-n4pj ready: true, restart count 0 Nov 26 14:56:05.530: INFO: Container test-container-subpath-inlinevolume-n4pj ready: false, restart count 0 Nov 26 14:56:05.530: INFO: pod-subpath-test-inlinevolume-x7q7 started at 2022-11-26 14:48:14 +0000 UTC (1+2 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-inlinevolume-x7q7 ready: true, restart count 0 Nov 26 14:56:05.530: INFO: Container test-container-subpath-inlinevolume-x7q7 ready: true, restart count 4 Nov 26 14:56:05.530: INFO: Container test-container-volume-inlinevolume-x7q7 ready: true, restart count 3 Nov 26 14:56:05.530: INFO: konnectivity-agent-cnxt9 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-478nt started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 14:56:05.530: INFO: pod-secrets-3119e1dd-0b42-4a96-8805-8da9054d7dcf started at 2022-11-26 14:47:41 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:56:05.530: INFO: test-container-pod started at 2022-11-26 14:48:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container webserver ready: true, restart count 2 Nov 26 14:56:05.530: 
INFO: pod-7dbe6380-f5d0-4852-b8bf-7231eca57b67 started at 2022-11-26 14:40:23 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:56:05.530: INFO: mutability-test-fxk7p started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container netexec ready: false, restart count 5 Nov 26 14:56:05.530: INFO: pod-subpath-test-preprovisionedpv-wqts started at 2022-11-26 14:43:30 +0000 UTC (1+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Init container init-volume-preprovisionedpv-wqts ready: true, restart count 0 Nov 26 14:56:05.530: INFO: Container test-container-subpath-preprovisionedpv-wqts ready: false, restart count 0 Nov 26 14:56:05.530: INFO: external-local-nodeport-zrz5r started at 2022-11-26 14:46:52 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container netexec ready: true, restart count 4 Nov 26 14:56:05.530: INFO: host-test-container-pod started at 2022-11-26 14:48:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:56:05.530: INFO: metrics-server-v0.5.2-867b8754b9-vrm2k started at 2022-11-26 14:38:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:56:05.530: INFO: Container metrics-server ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 26 14:56:05.530: INFO: failure-3 started at 2022-11-26 14:39:52 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container failure-3 ready: true, restart count 3 Nov 26 14:56:05.530: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:09 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:05.530: INFO: Container csi-attacher ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container csi-provisioner ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container csi-resizer ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container hostpath ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container liveness-probe ready: false, restart count 6 Nov 26 14:56:05.530: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 26 14:56:05.530: INFO: netserver-0 started at 2022-11-26 14:46:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container webserver ready: false, restart count 4 Nov 26 14:56:05.530: INFO: test-hostpath-type-82b5k started at 2022-11-26 14:48:04 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 14:56:05.530: INFO: test-hostpath-type-9pwvm started at 2022-11-26 14:48:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 14:56:05.530: INFO: metadata-proxy-v0.1-xb4cm started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:56:05.530: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:56:05.530: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-46mpd started at 2022-11-26 14:43:13 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 14:56:05.530: INFO: 
hostexec-bootstrap-e2e-minion-group-5c8w-cswzg started at 2022-11-26 14:39:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: false, restart count 7 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-c556l started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-74vq9 started at 2022-11-26 14:43:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:56:05.530: INFO: test-hostpath-type-kklb4 started at 2022-11-26 14:48:22 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 14:56:05.530: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:47:05 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:05.530: INFO: Container csi-attacher ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container csi-resizer ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container hostpath ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container liveness-probe ready: false, restart count 4 Nov 26 14:56:05.530: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 14:56:05.530: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:43:16 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:05.530: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container hostpath ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 14:56:05.530: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-h5pvn started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:56:05.530: INFO: netserver-0 started at 2022-11-26 14:46:58 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container webserver ready: false, restart count 4 Nov 26 14:56:05.530: INFO: external-provisioner-lmbdb started at 2022-11-26 14:47:53 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container nfs-provisioner ready: false, restart count 6 Nov 26 14:56:05.530: INFO: kube-proxy-bootstrap-e2e-minion-group-5c8w started at 2022-11-26 14:37:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 14:56:05.530: INFO: pod-secrets-5fdd18ad-0588-44cf-82a3-528f3248be63 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:56:05.530: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-ljxwk started at 2022-11-26 14:48:01 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container 
agnhost-container ready: true, restart count 1 Nov 26 14:56:05.530: INFO: failure-2 started at 2022-11-26 14:48:06 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:05.530: INFO: Container failure-2 ready: true, restart count 2 Nov 26 14:56:07.801: INFO: Latency metrics for node bootstrap-e2e-minion-group-5c8w Nov 26 14:56:07.801: INFO: Logging node info for node bootstrap-e2e-minion-group-90df Nov 26 14:56:07.856: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-90df 47ba7cef-c7a9-42dc-a972-e2581f5476da 7274 0 2022-11-26 14:37:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-90df kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-90df topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1883":"bootstrap-e2e-minion-group-90df","csi-hostpath-provisioning-8398":"bootstrap-e2e-minion-group-90df","csi-mock-csi-mock-volumes-1468":"csi-mock-csi-mock-volumes-1468","csi-mock-csi-mock-volumes-6396":"bootstrap-e2e-minion-group-90df"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:47:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 14:54:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-90df,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:54:09 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:02 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:02 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:02 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:54:02 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.184.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83ac01160a0c57758b5edb61ebb59ab4,SystemUUID:83ac0116-0a0c-5775-8b5e-db61ebb59ab4,BootID:8fc388ac-8473-4c3d-8b39-12dca64dff04,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:56:07.857: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-90df Nov 26 14:56:07.903: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-90df Nov 26 14:56:08.131: INFO: httpd started at 2022-11-26 14:39:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container httpd ready: false, restart count 6 Nov 26 14:56:08.131: INFO: csi-mockplugin-0 started at 2022-11-26 14:47:43 +0000 UTC (0+3 container statuses recorded) Nov 26 14:56:08.131: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 14:56:08.131: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 14:56:08.131: INFO: Container mock ready: true, restart count 3 Nov 26 14:56:08.131: INFO: hostpath-symlink-prep-provisioning-7611 started at 2022-11-26 14:41:44 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container init-volume-provisioning-7611 ready: false, restart count 0 Nov 26 14:56:08.131: INFO: external-local-update-92js7 started at 2022-11-26 14:41:45 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container netexec ready: true, restart count 5 Nov 26 14:56:08.131: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:47:21 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:08.131: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container hostpath ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 14:56:08.131: INFO: pod-9d4d56c4-abaf-4fe6-9f2b-9c75653f01f4 started at 2022-11-26 14:48:21 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:56:08.131: INFO: volume-snapshot-controller-0 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container volume-snapshot-controller ready: false, restart count 7 Nov 26 14:56:08.131: INFO: hostexec-bootstrap-e2e-minion-group-90df-ql9k6 started at 2022-11-26 14:41:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 14:56:08.131: INFO: execpod-drops5lkl started at 2022-11-26 14:43:22 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 14:56:08.131: INFO: lb-sourcerange-9gzfn started at 2022-11-26 14:43:29 +0000 UTC (0+1 container statuses 
recorded) Nov 26 14:56:08.131: INFO: Container netexec ready: false, restart count 7 Nov 26 14:56:08.131: INFO: httpd started at 2022-11-26 14:41:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container httpd ready: true, restart count 6 Nov 26 14:56:08.131: INFO: hostexec-bootstrap-e2e-minion-group-90df-zwwz7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:56:08.131: INFO: netserver-1 started at 2022-11-26 14:46:58 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container webserver ready: true, restart count 5 Nov 26 14:56:08.131: INFO: metadata-proxy-v0.1-ghfq8 started at 2022-11-26 14:37:20 +0000 UTC (0+2 container statuses recorded) Nov 26 14:56:08.131: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:56:08.131: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:56:08.131: INFO: konnectivity-agent-8rxr7 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 26 14:56:08.131: INFO: csi-mockplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+4 container statuses recorded) Nov 26 14:56:08.131: INFO: Container busybox ready: true, restart count 3 Nov 26 14:56:08.131: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 14:56:08.131: INFO: Container driver-registrar ready: true, restart count 4 Nov 26 14:56:08.131: INFO: Container mock ready: true, restart count 4 Nov 26 14:56:08.131: INFO: pod-subpath-test-preprovisionedpv-bhqw started at 2022-11-26 14:43:29 +0000 UTC (1+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Init container init-volume-preprovisionedpv-bhqw ready: true, restart count 0 Nov 26 14:56:08.131: INFO: Container test-container-subpath-preprovisionedpv-bhqw ready: false, restart count 0 Nov 26 14:56:08.131: INFO: l7-default-backend-8549d69d99-s4b5m started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 14:56:08.131: INFO: pod-subpath-test-inlinevolume-7q7t started at 2022-11-26 14:41:11 +0000 UTC (1+2 container statuses recorded) Nov 26 14:56:08.131: INFO: Init container init-volume-inlinevolume-7q7t ready: true, restart count 2 Nov 26 14:56:08.131: INFO: Container test-container-subpath-inlinevolume-7q7t ready: false, restart count 6 Nov 26 14:56:08.131: INFO: Container test-container-volume-inlinevolume-7q7t ready: false, restart count 5 Nov 26 14:56:08.131: INFO: netserver-1 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container webserver ready: true, restart count 6 Nov 26 14:56:08.131: INFO: hostexec-bootstrap-e2e-minion-group-90df-42ljh started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:56:08.131: INFO: execpod-accepttbp6q started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 4 Nov 26 14:56:08.131: INFO: lb-internal-bqdsf started at 2022-11-26 14:46:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container netexec ready: false, restart count 5 Nov 26 14:56:08.131: INFO: 
hostexec-bootstrap-e2e-minion-group-90df-px44z started at 2022-11-26 14:48:16 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:56:08.131: INFO: kube-proxy-bootstrap-e2e-minion-group-90df started at 2022-11-26 14:37:19 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 14:56:08.131: INFO: coredns-6d97d5ddb-thsmq started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container coredns ready: false, restart count 7 Nov 26 14:56:08.131: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:47:23 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:08.131: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container hostpath ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container liveness-probe ready: true, restart count 1 Nov 26 14:56:08.131: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 14:56:08.131: INFO: kube-dns-autoscaler-5f6455f985-g8dtn started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container autoscaler ready: false, restart count 6 Nov 26 14:56:08.131: INFO: pod-secrets-ca2e8eea-812f-46de-9d25-74cfeb71e013 started at 2022-11-26 14:41:16 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.131: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:56:08.131: INFO: mutability-test-pdxr6 started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Container netexec ready: true, restart count 6 Nov 26 14:56:08.132: INFO: failure-4 started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Container failure-4 ready: false, restart count 0 Nov 26 14:56:08.132: INFO: pod-subpath-test-preprovisionedpv-n6nd started at 2022-11-26 14:43:29 +0000 UTC (1+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Init container init-volume-preprovisionedpv-n6nd ready: true, restart count 0 Nov 26 14:56:08.132: INFO: Container test-container-subpath-preprovisionedpv-n6nd ready: false, restart count 0 Nov 26 14:56:08.132: INFO: hostexec-bootstrap-e2e-minion-group-90df-xp8d8 started at 2022-11-26 14:43:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Container agnhost-container ready: false, restart count 4 Nov 26 14:56:08.132: INFO: netserver-1 started at 2022-11-26 14:46:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Container webserver ready: false, restart count 5 Nov 26 14:56:08.132: INFO: back-off-cap started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:08.132: INFO: Container back-off-cap ready: false, restart count 7 Nov 26 14:56:09.218: INFO: Latency metrics for node bootstrap-e2e-minion-group-90df Nov 26 14:56:09.218: INFO: Logging node info for node bootstrap-e2e-minion-group-r2mh Nov 26 14:56:09.261: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-r2mh 1a723982-de14-44dd-ba83-f2a219df5b69 7343 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-r2mh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-r2mh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2338":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-multivolume-3585":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-provisioning-9596":"bootstrap-e2e-minion-group-r2mh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:47:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:54:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-r2mh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:54:10 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:12 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:12 +0000 
UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:54:12 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:54:12 +0000 UTC,LastTransitionTime:2022-11-26 14:37:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.108.57,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b02b78c4c4f55f886d1255b57a8f72a9,SystemUUID:b02b78c4-c4f5-5f88-6d12-55b57a8f72a9,BootID:89f45761-3bf9-44b7-ab35-4ef95f8fa75c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d,DevicePath:,},},Config:nil,},} Nov 26 14:56:09.261: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-r2mh Nov 26 14:56:09.304: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-r2mh Nov 26 14:56:09.590: INFO: pod-configmaps-083a45f1-1cc7-4319-bec1-83b30373c023 started at 2022-11-26 14:39:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 14:56:09.590: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:30 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:09.590: INFO: Container csi-attacher ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container csi-provisioner ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container csi-resizer ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container csi-snapshotter ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container hostpath ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container liveness-probe ready: true, restart count 7 Nov 26 14:56:09.590: INFO: Container node-driver-registrar ready: true, restart count 7 Nov 26 14:56:09.590: INFO: netserver-2 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container webserver ready: true, restart count 3 Nov 26 14:56:09.590: INFO: metadata-proxy-v0.1-66r9l started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:56:09.590: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:56:09.590: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:56:09.590: INFO: pod-120effda-3138-4ce8-9b4f-08806b37e6a7 started at 2022-11-26 14:43:36 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: 
Container write-pod ready: false, restart count 0 Nov 26 14:56:09.590: INFO: csi-mockplugin-0 started at 2022-11-26 14:40:16 +0000 UTC (0+4 container statuses recorded) Nov 26 14:56:09.590: INFO: Container busybox ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container driver-registrar ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container mock ready: true, restart count 6 Nov 26 14:56:09.590: INFO: netserver-2 started at 2022-11-26 14:46:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container webserver ready: true, restart count 3 Nov 26 14:56:09.590: INFO: coredns-6d97d5ddb-wmgqj started at 2022-11-26 14:37:40 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container coredns ready: false, restart count 6 Nov 26 14:56:09.590: INFO: netserver-2 started at 2022-11-26 14:46:58 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container webserver ready: true, restart count 3 Nov 26 14:56:09.590: INFO: konnectivity-agent-tb7mp started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container konnectivity-agent ready: true, restart count 6 Nov 26 14:56:09.590: INFO: hostexec-bootstrap-e2e-minion-group-r2mh-jx5n7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container agnhost-container ready: true, restart count 5 Nov 26 14:56:09.590: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:09.590: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container hostpath ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 14:56:09.590: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:10 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:09.590: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container hostpath ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 14:56:09.590: INFO: kube-proxy-bootstrap-e2e-minion-group-r2mh started at 2022-11-26 14:37:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:56:09.590: INFO: Container kube-proxy ready: false, restart count 7 Nov 26 14:56:09.590: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:46:40 +0000 UTC (0+7 container statuses recorded) Nov 26 14:56:09.590: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 
14:56:09.590: INFO: Container hostpath ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 14:56:09.590: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 14:56:10.395: INFO: Latency metrics for node bootstrap-e2e-minion-group-r2mh [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-234" for this suite. 11/26/22 14:56:10.395
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/service/util.go:48 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:48 +0x265 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 +0x11ce There were additional failures detected after the initial failure: [FAILED] Nov 26 14:51:46.773: failed to list events in namespace "loadbalancers-162": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/events": dial tcp 34.83.118.239:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 14:51:46.813: Couldn't delete ns: "loadbalancers-162": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-162": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-162", Err:(*net.OpError)(0xc002213c20)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
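The initial failure above comes from the HTTP reachability poll in test/e2e/framework/service/util.go:48: the spec repeatedly pokes the service's NodePort and fails once the endpoint has not answered within the poll's timeout. Below is a minimal sketch of that polling pattern, assuming the wait.PollImmediate helper visible in the goroutine dumps later in this log; the URL, interval, and timeout are placeholders taken from this run's log lines, not the framework's exact implementation.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Endpoint poked in the log below (node external IP + NodePort of the service).
	url := "http://35.230.112.32:30782/echo?msg=hello"

	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			// "connection refused" lands here; returning (false, nil) tells
			// PollImmediate to retry on the next tick rather than abort.
			fmt.Printf("poke failed, retrying: %v\n", err)
			return false, nil
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// agnhost's /echo handler returns the msg parameter, so a healthy
		// endpoint answers 200 with "hello".
		return resp.StatusCode == http.StatusOK && string(body) == "hello", nil
	})
	if err != nil {
		// A timeout here is what surfaces as the FAIL at util.go:48 above.
		fmt.Printf("endpoint never became reachable: %v\n", err)
	}
}

In this run every poke returns "connection refused" until the poll gives up, and the same connection-refused error then hits the DeferCleanup steps (event listing and namespace deletion), producing the additional failures listed above.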
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:40:05.091 Nov 26 14:40:05.091: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 14:40:05.093 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:40:05.218 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:40:05.298 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a TCP service [Slow] test/e2e/network/loadbalancer.go:77 Nov 26 14:40:05.467: INFO: namespace for TCP test: loadbalancers-162 STEP: creating a TCP service mutability-test with type=ClusterIP in namespace loadbalancers-162 11/26/22 14:40:05.514 Nov 26 14:40:05.566: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service mutability-test 11/26/22 14:40:05.567 Nov 26 14:40:05.612: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 14:40:05.654: INFO: Found all 1 pods Nov 26 14:40:05.654: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-pdxr6] Nov 26 14:40:05.654: INFO: Waiting up to 2m0s for pod "mutability-test-pdxr6" in namespace "loadbalancers-162" to be "running and ready" Nov 26 14:40:05.695: INFO: Pod "mutability-test-pdxr6": Phase="Pending", Reason="", readiness=false. Elapsed: 40.526109ms Nov 26 14:40:05.695: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-pdxr6' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:40:07.739: INFO: Pod "mutability-test-pdxr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084365802s Nov 26 14:40:07.739: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-pdxr6' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:40:09.738: INFO: Pod "mutability-test-pdxr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083859077s Nov 26 14:40:09.738: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-pdxr6' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:40:11.737: INFO: Pod "mutability-test-pdxr6": Phase="Running", Reason="", readiness=false. Elapsed: 6.082511138s Nov 26 14:40:11.737: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-pdxr6' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC }] Nov 26 14:40:13.743: INFO: Pod "mutability-test-pdxr6": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.088351567s Nov 26 14:40:13.743: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-pdxr6' on 'bootstrap-e2e-minion-group-90df' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:40:05 +0000 UTC }] Nov 26 14:40:15.736: INFO: Pod "mutability-test-pdxr6": Phase="Running", Reason="", readiness=true. Elapsed: 10.08173058s Nov 26 14:40:15.736: INFO: Pod "mutability-test-pdxr6" satisfied condition "running and ready" Nov 26 14:40:15.736: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-pdxr6] STEP: changing the TCP service to type=NodePort 11/26/22 14:40:15.736 Nov 26 14:40:15.831: INFO: TCP node port: 30781 STEP: hitting the TCP service's NodePort 11/26/22 14:40:15.831 Nov 26 14:40:15.831: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:15.872: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:17.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:17.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:19.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:19.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:21.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:21.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:23.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:23.915: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:25.873: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:25.913: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:27.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:27.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:29.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:29.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:31.873: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:31.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:33.872: 
INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:33.916: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:35.873: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:35.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:37.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:37.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:39.872: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:39.912: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): Get "http://35.230.112.32:30781/echo?msg=hello": dial tcp 35.230.112.32:30781: connect: connection refused Nov 26 14:40:41.873: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:40:41.954: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): success STEP: creating a static load balancer IP 11/26/22 14:40:41.954 Nov 26 14:40:44.205: INFO: Allocated static load balancer IP: 34.168.200.123 STEP: changing the TCP service to type=LoadBalancer 11/26/22 14:40:44.205 STEP: waiting for the TCP service to have a load balancer 11/26/22 14:40:44.291 Nov 26 14:40:44.291: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 26 14:41:52.412: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:54.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:56.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:58.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:00.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:02.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:04.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:06.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:08.372: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:10.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:12.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:14.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:16.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:18.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:20.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:22.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:38.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:40.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:42.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:44.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:46.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:48.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:50.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:52.371: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:54.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:56.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:58.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:00.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:02.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:04.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:06.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:08.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:10.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:12.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:14.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:16.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:18.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:20.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:22.372: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:24.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:26.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:28.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:30.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:32.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:34.372: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:36.371: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m0.331s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 4m21.131s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002c77218, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002e37440?, 0xc004037b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ab9950?, 0x7fa7740?, 0xc000098c00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002bca960, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002bca960, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m20.336s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m20.005s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 4m41.136s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002c77218, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002e37440?, 0xc004037b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ab9950?, 0x7fa7740?, 0xc000098c00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002bca960, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002bca960, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m40.341s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m40.01s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m1.141s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002c77218, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xc8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002e37440?, 0xc004037b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ab9950?, 0x7fa7740?, 0xc000098c00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002bca960, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002bca960, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m0.345s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m0.014s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m21.145s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002c77218, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002e37440?, 0xc004037b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ab9950?, 0x7fa7740?, 0xc000098c00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002bca960, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002bca960, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m20.347s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m20.016s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m41.147s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002c77218, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc002e37440?, 0xc004037b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000ab9950?, 0x7fa7740?, 0xc000098c00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002bca960, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002bca960, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:46:38.386: INFO: TCP load balancer: 34.168.200.123 STEP: demoting the static IP to ephemeral 11/26/22 14:46:38.386 STEP: hitting the TCP service's NodePort 11/26/22 14:46:40.02 Nov 26 14:46:40.021: INFO: Poking "http://35.230.112.32:30781/echo?msg=hello" Nov 26 14:46:40.105: INFO: Poke("http://35.230.112.32:30781/echo?msg=hello"): success STEP: hitting the TCP service's LoadBalancer 11/26/22 14:46:40.105 Nov 26 14:46:40.105: INFO: Poking "http://34.168.200.123:80/echo?msg=hello" Nov 26 14:46:40.146: INFO: Poke("http://34.168.200.123:80/echo?msg=hello"): Get "http://34.168.200.123:80/echo?msg=hello": dial tcp 34.168.200.123:80: connect: connection refused Nov 26 14:46:42.147: INFO: Poking "http://34.168.200.123:80/echo?msg=hello" Nov 26 14:46:42.187: INFO: Poke("http://34.168.200.123:80/echo?msg=hello"): Get "http://34.168.200.123:80/echo?msg=hello": dial tcp 34.168.200.123:80: connect: connection refused Nov 26 14:46:44.146: INFO: Poking "http://34.168.200.123:80/echo?msg=hello" Nov 26 14:46:44.186: INFO: Poke("http://34.168.200.123:80/echo?msg=hello"): Get "http://34.168.200.123:80/echo?msg=hello": dial tcp 34.168.200.123:80: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m40.349s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m40.018s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 5.335s) test/e2e/network/loadbalancer.go:192 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc003c3b6c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000110290?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000a9c9f0?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc001df5ef0, 0xe}, 0x50, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:193 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:46:46.147: INFO: Poking "http://34.168.200.123:80/echo?msg=hello" Nov 26 14:46:46.226: INFO: Poke("http://34.168.200.123:80/echo?msg=hello"): success STEP: changing the TCP service's NodePort 11/26/22 14:46:46.226 Nov 26 14:46:46.463: INFO: TCP node port: 30782 STEP: hitting the TCP service's new NodePort 11/26/22 14:46:46.463 Nov 26 14:46:46.463: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:46.504: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:48.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:48.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:50.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:50.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:52.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:52.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:54.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:54.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:56.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:56.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:46:58.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:46:58.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:00.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:00.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:02.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:02.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:04.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:04.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: 
connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m0.351s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m0.02s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 18.979s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:47:06.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:06.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:08.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:08.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:10.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:10.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:12.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:12.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:14.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:14.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:16.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:16.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get 
"http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:18.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:18.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:20.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:20.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:22.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:22.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:24.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:24.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m20.354s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m20.023s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 38.982s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:47:26.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:26.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:28.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:28.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:30.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:30.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:32.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:32.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:34.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:34.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:36.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:36.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:38.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:38.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:40.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:40.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:42.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:42.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:44.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:44.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m40.356s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m40.025s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 58.984s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:47:46.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:46.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:48.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:48.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:50.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:50.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:52.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:52.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:54.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:54.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:56.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:56.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:47:58.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:47:58.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:48:00.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:00.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:02.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:02.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:04.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:04.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m0.358s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m0.027s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 1m18.986s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:48:06.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:06.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:08.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:08.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:10.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:10.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:12.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:12.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:14.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:14.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:16.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:16.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:18.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:18.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:20.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:20.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:22.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:22.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:24.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:24.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m20.36s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m20.029s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 1m38.988s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:48:26.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:26.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:28.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:28.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:30.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:30.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:32.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:32.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:34.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:34.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:36.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:36.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:38.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:38.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:48:40.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:40.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:42.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:42.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:44.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:44.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m40.363s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m40.032s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 1m58.991s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:48:46.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:46.545: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:48.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:48.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:50.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:50.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:52.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:52.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:54.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:54.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:56.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:56.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:48:58.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:48:58.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:00.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:00.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:02.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:02.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:04.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:04.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m0.365s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m0.034s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 2m18.993s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:06.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:06.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:08.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:08.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:10.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:10.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:12.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:12.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:14.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:14.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:16.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:16.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:18.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:18.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:49:20.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:20.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:22.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:22.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:24.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:24.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m20.368s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m20.037s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 2m38.995s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:26.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:26.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:28.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:28.545: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:30.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:30.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:32.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:32.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:34.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:34.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:36.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:36.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:38.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:38.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:40.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:40.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:42.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:42.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:44.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:44.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m40.37s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m40.039s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 2m58.998s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:46.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:46.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:48.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:48.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:50.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:50.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:52.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:52.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:54.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:54.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:56.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:56.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:49:58.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:49:58.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:50:00.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:00.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:02.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:02.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:04.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:04.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m0.373s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m0.042s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 3m19.001s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:50:06.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:06.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:08.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:08.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:10.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:10.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:12.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:12.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:14.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:14.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:16.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:16.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:18.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:18.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:20.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:20.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:22.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:22.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:24.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:24.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m20.375s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m20.044s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 3m39.003s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:50:26.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:26.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:28.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:28.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:30.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:30.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:32.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:32.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:34.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:34.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:36.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:36.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:38.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:38.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:50:40.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:40.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:42.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:42.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:44.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:44.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m40.377s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m40.046s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 3m59.005s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:50:46.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:46.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:48.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:48.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:50.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:50.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:52.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:52.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:54.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:54.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:56.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:56.545: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:50:58.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:50:58.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:00.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:00.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:02.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:02.545: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:04.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:04.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m0.379s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m0.048s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 4m19.007s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:51:06.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:06.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:08.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:08.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:10.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:10.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:12.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:12.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:14.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:14.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:16.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:16.543: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:18.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:18.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 
35.230.112.32:30782: connect: connection refused Nov 26 14:51:20.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:20.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:22.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:22.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:24.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:24.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m20.381s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m20.05s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 4m39.009s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:51:26.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:26.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:28.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:28.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:30.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:30.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:32.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:32.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:34.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:34.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:36.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:36.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:38.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:38.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:40.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:40.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:42.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:42.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:44.504: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:44.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #20 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m40.383s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m40.052s) 
test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's new NodePort (Step Runtime: 4m59.011s) test/e2e/network/loadbalancer.go:210 Spec Goroutine goroutine 731 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00012e000}, 0xc000fbc780, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00012e000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00012e000}, 0x2d?, 0xc001aebc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc000dfb170?, 0x7688904?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001dadf70, 0xc001dadfb8}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:51:46.505: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:46.544: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:46.544: INFO: Poking "http://35.230.112.32:30782/echo?msg=hello" Nov 26 14:51:46.584: INFO: Poke("http://35.230.112.32:30782/echo?msg=hello"): Get "http://35.230.112.32:30782/echo?msg=hello": dial tcp 35.230.112.32:30782: connect: connection refused Nov 26 14:51:46.584: FAIL: Could not reach HTTP service through 35.230.112.32:30782 after 5m0s Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc002b82930, 0xd}, 0x783e, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:48 +0x265 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:211 +0x11ce [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 14:51:46.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 14:51:46.624: INFO: Output of kubectl describe svc: Nov 26 14:51:46.624: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-162 describe svc --namespace=loadbalancers-162' Nov 26 14:51:46.733: INFO: rc: 1 Nov 26 14:51:46.733: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:51:46.734 STEP: Collecting events from namespace "loadbalancers-162". 11/26/22 14:51:46.734 Nov 26 14:51:46.773: INFO: Unexpected error: failed to list events in namespace "loadbalancers-162": <*url.Error | 0xc00162b710>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/events", Err: <*net.OpError | 0xc00151d040>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00162b6e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011d4980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:51:46.773: FAIL: failed to list events in namespace "loadbalancers-162": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-162/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00027e5c0, {0xc000cc32d8, 0x11}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000248ea0}, {0xc000cc32d8, 0x11}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00027e650?, {0xc000cc32d8?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000c504b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000e037d0?, 0xc003edaf50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000e037d0?, 0x7fadfa0?}, {0xae73300?, 0xc003edaf80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-162" for this suite. 
11/26/22 14:51:46.774 Nov 26 14:51:46.813: FAIL: Couldn't delete ns: "loadbalancers-162": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-162": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-162", Err:(*net.OpError)(0xc002213c20)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c504b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000e03690?, 0xc0000cefb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000e03690?, 0x0?}, {0xae73300?, 0x5?, 0xc000cc2fc0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
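The goroutine dumps above show TestReachableHTTP delegating to TestReachableHTTPWithRetriableErrorCodes, which drives a wait.PollImmediate loop until the 5m0s deadline reported in the FAIL line. As a rough, hypothetical illustration only (not the framework's actual code; the 2s interval, the /echo?msg=hello path, and the status check are assumptions inferred from the log), a standalone poll of the same shape might look like:

```go
// Hypothetical sketch: poll an HTTP endpoint the way the log above does,
// retrying "connect: connection refused" until a 5-minute deadline expires.
package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func testReachableHTTP(host string, port int) error {
	url := fmt.Sprintf("http://%s:%d/echo?msg=hello", host, port)
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			// Dial errors such as "connect: connection refused" are treated as
			// retriable; the poll simply tries again on the next tick.
			fmt.Printf("Poke(%q): %v\n", url, err)
			return false, nil
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}

func main() {
	// Host and NodePort taken from the failing run above (35.230.112.32:30782).
	if err := testReachableHTTP("35.230.112.32", 30782); err != nil {
		fmt.Println("could not reach HTTP service:", err)
	}
}
```

When every poke fails until the deadline, PollImmediate returns a timeout error, which is what surfaces above as "Could not reach HTTP service through 35.230.112.32:30782 after 5m0s".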
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/service.go:604 k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) test/e2e/network/service.go:604 +0x17b k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 +0xb65 There were additional failures detected after the initial failure: [FAILED] Nov 26 14:50:37.572: failed to list events in namespace "loadbalancers-4126": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/events": dial tcp 34.83.118.239:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 14:50:37.612: Couldn't delete ns: "loadbalancers-4126": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-4126", Err:(*net.OpError)(0xc004db1860)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
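Most of this spec's runtime in the detailed log below is spent in TestJig.WaitForLoadBalancer, which polls the Service object until an ingress address appears, tolerating transient apiserver errors along the way. A minimal, hypothetical sketch of that kind of wait follows; the 2s interval is an assumption, while the 15m timeout, kubeconfig path, namespace "loadbalancers-4126", and service name "mutability-test" are taken from the log. This is not the framework's implementation.

```go
// Hypothetical sketch: poll a Service until its load-balancer ingress is set,
// retrying transient apiserver errors (e.g. "connection refused") until timeout.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLoadBalancer(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// A temporarily unreachable apiserver shows up here; keep retrying.
			fmt.Println("Retrying .... error trying to get Service:", err)
			return false, nil
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForLoadBalancer(cs, "loadbalancers-4126", "mutability-test"); err != nil {
		fmt.Println("service never got a load balancer:", err)
	}
}
```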
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:41:30.302 Nov 26 14:41:30.302: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 14:41:30.303 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:41:30.452 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:41:30.536 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287 Nov 26 14:41:30.716: INFO: namespace for TCP test: loadbalancers-4126 STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-4126 11/26/22 14:41:30.765 Nov 26 14:41:30.840: INFO: service port UDP: 80 STEP: creating a pod to be part of the UDP service mutability-test 11/26/22 14:41:30.84 Nov 26 14:41:30.904: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 14:41:30.957: INFO: Found all 1 pods Nov 26 14:41:30.957: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-fxk7p] Nov 26 14:41:30.957: INFO: Waiting up to 2m0s for pod "mutability-test-fxk7p" in namespace "loadbalancers-4126" to be "running and ready" Nov 26 14:41:31.007: INFO: Pod "mutability-test-fxk7p": Phase="Pending", Reason="", readiness=false. Elapsed: 49.570336ms Nov 26 14:41:31.007: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-fxk7p' on 'bootstrap-e2e-minion-group-5c8w' to be 'Running' but was 'Pending' Nov 26 14:41:33.060: INFO: Pod "mutability-test-fxk7p": Phase="Running", Reason="", readiness=true. Elapsed: 2.10228114s Nov 26 14:41:33.060: INFO: Pod "mutability-test-fxk7p" satisfied condition "running and ready" Nov 26 14:41:33.060: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [mutability-test-fxk7p] STEP: changing the UDP service to type=NodePort 11/26/22 14:41:33.06 Nov 26 14:41:33.200: INFO: UDP node port: 31405 STEP: hitting the UDP service's NodePort 11/26/22 14:41:33.2 Nov 26 14:41:33.201: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:33.241: INFO: Poke("udp://35.230.112.32:31405"): read udp 10.60.165.175:43554->35.230.112.32:31405: read: connection refused Nov 26 14:41:35.242: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:35.281: INFO: Poke("udp://35.230.112.32:31405"): read udp 10.60.165.175:33368->35.230.112.32:31405: read: connection refused Nov 26 14:41:37.241: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:37.281: INFO: Poke("udp://35.230.112.32:31405"): read udp 10.60.165.175:47396->35.230.112.32:31405: read: connection refused Nov 26 14:41:39.242: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:39.281: INFO: Poke("udp://35.230.112.32:31405"): read udp 10.60.165.175:38994->35.230.112.32:31405: read: connection refused Nov 26 14:41:41.242: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:41.281: INFO: Poke("udp://35.230.112.32:31405"): read udp 10.60.165.175:52883->35.230.112.32:31405: read: connection refused Nov 26 14:41:43.242: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:41:43.283: INFO: Poke("udp://35.230.112.32:31405"): success STEP: creating a static load balancer IP 11/26/22 14:41:43.283 Nov 26 14:41:45.319: INFO: Allocated static load balancer IP: 34.145.115.61 STEP: changing the UDP service to type=LoadBalancer 11/26/22 14:41:45.319 STEP: demoting the static IP to ephemeral 11/26/22 14:41:45.437 STEP: waiting for the UDP service to have a load balancer 11/26/22 14:41:47.184 Nov 26 14:41:47.184: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 26 14:41:53.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:55.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:57.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:41:59.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:01.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:03.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:05.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:07.276: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:09.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:11.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:13.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:15.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:17.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:19.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:21.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:42:23.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:37.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:39.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:41.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:43.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:45.281: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:47.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:49.275: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:51.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:53.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:55.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:57.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:43:59.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:01.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:03.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:05.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:07.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:09.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:11.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:13.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:15.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:17.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:19.275: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:21.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:23.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:25.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:27.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:29.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:31.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:33.276: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:35.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:37.275: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/services/mutability-test": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m0.369s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 4m43.487s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m20.372s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m20.004s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m3.49s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 5m40.374s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m23.492s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m0.376s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m0.008s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m43.494s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m20.378s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m20.01s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m3.496s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m40.38s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m40.012s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m23.498s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m0.382s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m0.014s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m43.5s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc004feaa20?, 0xc0006bbbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005012640?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc004baaaf0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc004baaaf0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:48:31.291: INFO: UDP load balancer: 34.127.89.252 STEP: hitting the UDP service's NodePort 11/26/22 14:48:31.291 Nov 26 14:48:31.291: INFO: Poking udp://35.230.112.32:31405 Nov 26 14:48:31.339: INFO: Poke("udp://35.230.112.32:31405"): success STEP: hitting the UDP service's LoadBalancer 11/26/22 14:48:31.339 Nov 26 14:48:31.339: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:34.339: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:60467->34.127.89.252:80: i/o timeout Nov 26 14:48:36.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:39.340: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33573->34.127.89.252:80: i/o timeout Nov 26 14:48:40.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:43.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38275->34.127.89.252:80: i/o timeout Nov 26 14:48:44.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:44.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:58413->34.127.89.252:80: read: connection refused Nov 26 14:48:46.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:46.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:34651->34.127.89.252:80: read: connection refused Nov 26 14:48:48.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:48.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:42691->34.127.89.252:80: read: connection refused Nov 26 14:48:50.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:50.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:40279->34.127.89.252:80: read: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m20.384s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m20.015s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 19.347s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:48:52.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:52.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38150->34.127.89.252:80: read: connection refused Nov 26 14:48:54.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:48:57.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:59526->34.127.89.252:80: i/o timeout Nov 26 14:48:58.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:01.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:60191->34.127.89.252:80: i/o timeout Nov 26 14:49:02.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:05.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33350->34.127.89.252:80: i/o timeout Nov 26 14:49:06.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:09.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:54652->34.127.89.252:80: i/o timeout Nov 26 14:49:10.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:10.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38857->34.127.89.252:80: read: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m40.385s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m40.017s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 39.348s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:12.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:12.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:45718->34.127.89.252:80: read: connection refused Nov 26 14:49:14.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:14.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:54207->34.127.89.252:80: read: connection refused Nov 26 14:49:16.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:19.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:53104->34.127.89.252:80: i/o timeout Nov 26 14:49:20.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:20.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:43513->34.127.89.252:80: read: connection refused Nov 26 14:49:22.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:22.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33592->34.127.89.252:80: read: connection refused Nov 26 14:49:24.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:27.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:48531->34.127.89.252:80: i/o timeout Nov 26 14:49:28.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:28.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:56457->34.127.89.252:80: read: connection refused Nov 26 14:49:30.340: INFO: Poking udp://34.127.89.252:80 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m0.387s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 8m0.019s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 59.35s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [IO wait] internal/poll.runtime_pollWait(0x7fcc14187bc0, 0x72) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc0016b1700?, 0xc00501d480?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitRead(...) 
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc0016b1700, {0xc00501d480, 0x6, 0x6}) /usr/local/go/src/internal/poll/fd_unix.go:167 net.(*netFD).Read(0xc0016b1700, {0xc00501d480?, 0xc000e6f818?, 0x2671252?}) /usr/local/go/src/net/fd_posix.go:55 net.(*conn).Read(0xc004ff0368, {0xc00501d480?, 0xae40400?, 0xae40400?}) /usr/local/go/src/net/net.go:183 > k8s.io/kubernetes/test/e2e/network.pokeUDP({0xc0051724d0, 0xd}, 0x50, {0x75ca3e8, 0xa}, 0xc000e6fa70) test/e2e/network/service.go:562 > k8s.io/kubernetes/test/e2e/network.testReachableUDP.func1() test/e2e/network/service.go:593 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x7fadb00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:33.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:53409->34.127.89.252:80: i/o timeout Nov 26 14:49:34.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:34.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:32782->34.127.89.252:80: read: connection refused Nov 26 14:49:36.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:36.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:60242->34.127.89.252:80: read: connection refused Nov 26 14:49:38.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:38.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38359->34.127.89.252:80: read: connection refused Nov 26 14:49:40.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:40.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:50822->34.127.89.252:80: read: connection refused Nov 26 14:49:42.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:42.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:58600->34.127.89.252:80: read: connection refused Nov 26 14:49:44.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:44.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:56927->34.127.89.252:80: read: connection refused Nov 26 14:49:46.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:46.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:47523->34.127.89.252:80: read: connection refused Nov 26 14:49:48.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:48.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:59130->34.127.89.252:80: read: connection refused Nov 26 14:49:50.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:50.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38953->34.127.89.252:80: read: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m20.389s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 8m20.021s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 1m19.352s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:49:52.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:52.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:48167->34.127.89.252:80: read: connection refused Nov 26 14:49:54.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:54.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:53257->34.127.89.252:80: read: connection refused Nov 26 14:49:56.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:56.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:50890->34.127.89.252:80: read: connection refused Nov 26 14:49:58.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:49:58.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33472->34.127.89.252:80: read: connection refused Nov 26 14:50:00.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:00.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:36176->34.127.89.252:80: read: connection refused Nov 26 14:50:02.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:02.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:35218->34.127.89.252:80: read: connection refused Nov 26 14:50:04.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:07.340: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:34567->34.127.89.252:80: i/o timeout Nov 26 14:50:08.340: INFO: Poking udp://34.127.89.252:80 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m40.391s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 8m40.022s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 1m39.354s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [IO wait] internal/poll.runtime_pollWait(0x7fcc14188430, 0x72) /usr/local/go/src/runtime/netpoll.go:305 internal/poll.(*pollDesc).wait(0xc002132680?, 0xc005170470?, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 internal/poll.(*pollDesc).waitRead(...) 
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc002132680, {0xc005170470, 0x6, 0x6}) /usr/local/go/src/internal/poll/fd_unix.go:167 net.(*netFD).Read(0xc002132680, {0xc005170470?, 0xc000e6f818?, 0x2671252?}) /usr/local/go/src/net/fd_posix.go:55 net.(*conn).Read(0xc00529a100, {0xc005170470?, 0xae40400?, 0xae40400?}) /usr/local/go/src/net/net.go:183 > k8s.io/kubernetes/test/e2e/network.pokeUDP({0xc0051724d0, 0xd}, 0x50, {0x75ca3e8, 0xa}, 0xc000e6fa70) test/e2e/network/service.go:562 > k8s.io/kubernetes/test/e2e/network.testReachableUDP.func1() test/e2e/network/service.go:593 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x7fadb00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:50:11.340: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33444->34.127.89.252:80: i/o timeout Nov 26 14:50:12.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:12.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:52889->34.127.89.252:80: read: connection refused Nov 26 14:50:14.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:14.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:54511->34.127.89.252:80: read: connection refused Nov 26 14:50:16.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:19.341: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:44447->34.127.89.252:80: i/o timeout Nov 26 14:50:20.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:20.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:57424->34.127.89.252:80: read: connection refused Nov 26 14:50:22.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:25.342: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:58560->34.127.89.252:80: i/o timeout Nov 26 14:50:26.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:26.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:41470->34.127.89.252:80: read: connection refused Nov 26 14:50:28.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:28.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:48890->34.127.89.252:80: read: connection refused Nov 26 14:50:30.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:30.379: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:51947->34.127.89.252:80: read: connection refused ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 9m0.393s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 9m0.024s) test/e2e/network/loadbalancer.go:287 At [By Step] hitting the UDP service's LoadBalancer (Step Runtime: 1m59.356s) test/e2e/network/loadbalancer.go:392 Spec Goroutine goroutine 1106 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0034fe480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x60?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0001ca600?, 0xc000e6fcb0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x65cbc00?, 0xc0012427c8?, 0x754e980?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) 
test/e2e/network/service.go:603 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0037c3500}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:50:32.341: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:32.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:33504->34.127.89.252:80: read: connection refused Nov 26 14:50:34.340: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:34.380: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:38849->34.127.89.252:80: read: connection refused Nov 26 14:50:34.380: INFO: Poking udp://34.127.89.252:80 Nov 26 14:50:37.381: INFO: Poke("udp://34.127.89.252:80"): read udp 10.60.165.175:53005->34.127.89.252:80: i/o timeout Nov 26 14:50:37.381: FAIL: Could not reach UDP service through 34.127.89.252:80 after 2m0s: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.testReachableUDP({0xc0051724d0, 0xd}, 0x50, 0x0?) test/e2e/network/service.go:604 +0x17b k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:393 +0xb65 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 14:50:37.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 14:50:37.421: INFO: Output of kubectl describe svc: Nov 26 14:50:37.421: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-4126 describe svc --namespace=loadbalancers-4126' Nov 26 14:50:37.532: INFO: rc: 1 Nov 26 14:50:37.532: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:50:37.532 STEP: Collecting events from namespace "loadbalancers-4126". 
11/26/22 14:50:37.532 Nov 26 14:50:37.572: INFO: Unexpected error: failed to list events in namespace "loadbalancers-4126": <*url.Error | 0xc0052a0f60>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/events", Err: <*net.OpError | 0xc00545e320>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0051bc990>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00543ae20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 14:50:37.572: FAIL: failed to list events in namespace "loadbalancers-4126": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0012cc5c0, {0xc004e84768, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0038e5040}, {0xc004e84768, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0012cc650?, {0xc004e84768?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011ca4b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004832de0?, 0xc000baff50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004832de0?, 0x7fadfa0?}, {0xae73300?, 0xc000baff80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-4126" for this suite. 11/26/22 14:50:37.573 Nov 26 14:50:37.612: FAIL: Couldn't delete ns: "loadbalancers-4126": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-4126": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-4126", Err:(*net.OpError)(0xc004db1860)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011ca4b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004832d60?, 0xc0009e4fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004832d60?, 0x0?}, {0xae73300?, 0x5?, 0xc004ffca98?}) /usr/local/go/src/reflect/value.go:368 +0xbc
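The 2m0s timeout above comes from the UDP reachability loop in testReachableUDP (test/e2e/network/service.go), which repeatedly sends a datagram to the LoadBalancer address and waits for any reply; both the "read: connection refused" and the "i/o timeout" messages in the log are errors returned from that read. Below is a minimal standalone sketch of the poke-and-poll pattern for reference. It is not the e2e framework code itself: the endpoint, payload, interval, and timeouts are placeholders chosen to mirror the loop seen in the log.

// Illustrative sketch only: a simplified version of the poke-until-reachable
// pattern from the log above. It does not use the k8s.io e2e framework.
package main

import (
	"fmt"
	"net"
	"time"
)

// pokeUDP sends one datagram and waits briefly for any reply.
func pokeUDP(host string, port int, payload string, timeout time.Duration) error {
	addr := net.JoinHostPort(host, fmt.Sprint(port))
	conn, err := net.DialTimeout("udp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	defer conn.Close()

	if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	if _, err := conn.Write([]byte(payload)); err != nil {
		return fmt.Errorf("write: %w", err)
	}
	buf := make([]byte, 1024)
	if _, err := conn.Read(buf); err != nil {
		// "read: connection refused" and "i/o timeout" in the log both
		// surface from this read.
		return fmt.Errorf("read: %w", err)
	}
	return nil
}

// testReachableUDP polls pokeUDP every interval until it succeeds or the
// overall timeout expires, mirroring the 2m0s loop in the failure above.
func testReachableUDP(host string, port int, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := pokeUDP(host, port, "echo hostname", interval)
		if err == nil {
			return nil
		}
		fmt.Printf("Poke(udp://%s:%d): %v\n", host, port, err)
		if time.Now().After(deadline) {
			return fmt.Errorf("could not reach UDP service through %s:%d after %v", host, port, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Placeholder endpoint; in the log this was the LoadBalancer IP 34.127.89.252:80.
	if err := testReachableUDP("127.0.0.1", 8080, 2*time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}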
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/network/loadbalancer.go:606 k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:606 +0x2df There were additional failures detected after the initial failure: [FAILED] Nov 26 15:01:46.712: failed to list events in namespace "loadbalancers-5567": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/events": dial tcp 34.83.118.239:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 15:01:46.752: Couldn't delete ns: "loadbalancers-5567": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-5567", Err:(*net.OpError)(0xc001ae2050)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 from junit_01.xml
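This second failure (namespace loadbalancers-5567) is the same API-server outage surfacing in a different wait loop: the spec sits in TestJig.WaitForLoadBalancer (test/e2e/framework/service/jig.go), polling the Service until status.loadBalancer.ingress is populated, and every GET is refused. The sketch below shows that style of wait with plain client-go polling; it is not the TestJig implementation, and the namespace, service name, and kubeconfig path are copied from the log purely for illustration.

// Illustrative sketch only: poll a Service until it reports a LoadBalancer
// ingress, tolerating transient API errors the way the log above does
// ("Retrying .... error trying to get Service ...").
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "loadbalancers-5567", "lb-internal"

	// Poll every 2s for up to 15m, matching the "Waiting up to 15m0s for
	// service ... to have a LoadBalancer" step in the log.
	err = wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		svc, getErr := client.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			// Treat connection refused and similar errors as transient and keep polling.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, getErr)
			return false, nil
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for LoadBalancer ingress:", err)
	}
}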
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:44:08.616 Nov 26 14:44:08.616: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 14:44:08.617 Nov 26 14:44:08.656: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:10.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:12.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:14.697: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:16.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:18.697: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:20.697: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:22.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:24.697: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:26.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:28.697: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:30.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:32.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:34.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:44:36.696: INFO: Unexpected error while creating namespace: Post "https://34.83.118.239/api/v1/namespaces": dial tcp 34.83.118.239:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:46:37.454 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:46:37.593 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/26/22 14:46:37.896 Nov 26 14:46:37.968: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 14:46:38.038: INFO: Found 0/1 pods - will retry Nov 26 
14:46:40.084: INFO: Found all 1 pods Nov 26 14:46:40.084: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-bqdsf] Nov 26 14:46:40.084: INFO: Waiting up to 2m0s for pod "lb-internal-bqdsf" in namespace "loadbalancers-5567" to be "running and ready" Nov 26 14:46:40.125: INFO: Pod "lb-internal-bqdsf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.622503ms Nov 26 14:46:40.125: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-bqdsf' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:46:42.199: INFO: Pod "lb-internal-bqdsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115412899s Nov 26 14:46:42.199: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-bqdsf' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:46:44.224: INFO: Pod "lb-internal-bqdsf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139879109s Nov 26 14:46:44.224: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-bqdsf' on 'bootstrap-e2e-minion-group-90df' to be 'Running' but was 'Pending' Nov 26 14:46:46.224: INFO: Pod "lb-internal-bqdsf": Phase="Running", Reason="", readiness=true. Elapsed: 6.14074966s Nov 26 14:46:46.225: INFO: Pod "lb-internal-bqdsf" satisfied condition "running and ready" Nov 26 14:46:46.225: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-internal-bqdsf] STEP: creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled 11/26/22 14:46:46.225 Nov 26 14:46:46.373: INFO: Waiting up to 15m0s for service "lb-internal" to have a LoadBalancer Nov 26 14:48:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:36.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:38.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:42.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:50.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:54.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:48:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:02.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:04.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:12.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:20.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:24.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:26.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:32.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:36.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:38.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:50.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:54.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:49:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:20.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:22.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:24.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:26.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:28.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:36.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:38.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:42.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:48.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:50.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:54.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:56.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:50:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:18.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:20.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:22.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:24.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:26.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:32.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:36.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m29.186s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 4m51.577s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:51:38.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:42.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:44.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:50.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:52.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:54.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:51:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m49.187s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 5m11.578s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:51:58.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:00.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:02.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:08.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:12.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:16.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m9.19s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 5m31.581s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:52:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:20.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:24.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:26.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:28.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:32.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:36.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m29.191s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 5m51.582s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:52:38.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:40.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:42.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:50.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:52.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:54.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:52:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m49.194s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m20.008s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 6m11.585s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:52:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m9.196s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m40.011s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 6m31.587s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:20.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:22.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:24.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:26.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:32.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:36.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m29.198s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m0.013s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 6m51.589s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:38.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:42.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:48.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:50.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:54.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:53:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m49.2s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m20.014s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 7m11.591s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:53:58.483: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m9.201s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m40.016s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 7m31.592s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m29.203s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m0.018s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 7m51.594s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m49.205s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m20.02s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 8m11.596s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m9.207s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m40.022s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 8m31.598s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m29.209s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m0.024s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 8m51.6s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m49.213s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m20.028s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 9m11.604s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m9.215s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m40.03s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 9m31.606s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m29.217s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m0.031s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 9m51.608s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 12m49.218s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m20.033s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 10m11.609s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m9.221s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m40.036s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 10m31.612s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:57:36.483: INFO: Retrying .... 
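
The stack traces in the progress reports above show where the spec is parked: service.TestJig.WaitForLoadBalancer blocked inside wait.PollImmediate, re-issuing the Service GET roughly every two seconds while the apiserver refuses connections. The following is a minimal, illustrative sketch of that polling pattern, not the framework's actual helper; the namespace and Service name are copied from the log, while the kubeconfig handling and the timeout are assumptions.

```go
// Illustrative sketch of the polling loop the stack trace shows
// (service.TestJig.WaitForLoadBalancer -> wait.PollImmediate); not the real helper.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLoadBalancerIngress polls the Service until it reports a LoadBalancer
// ingress or the timeout expires. Transient errors (such as "connection refused")
// are logged and retried rather than failing the wait.
func waitForLoadBalancerIngress(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil // keep polling on transient API errors
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

func main() {
	// Kubeconfig path as reported in the suite's own log output.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Namespace and Service name as they appear in the log; the timeout is an assumption.
	if err := waitForLoadBalancerIngress(cs, "loadbalancers-5567", "lb-internal", 15*time.Minute); err != nil {
		panic(err)
	}
}
```

Because the condition function returns (false, nil) on transient errors, connection-refused failures never abort the wait; the loop simply keeps logging "Retrying ...." until the overall timeout fires, which matches the behaviour recorded above.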
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m29.223s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m0.038s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 10m51.614s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:57:38.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:44.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:48.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:50.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:54.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:57:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 13m49.225s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m20.039s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 11m11.616s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:57:58.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:00.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:08.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:10.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m9.227s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m40.042s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 11m31.618s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:58:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:20.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:24.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:26.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:32.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:36.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m29.229s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m0.043s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 11m51.62s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:58:38.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:40.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:44.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:50.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:52.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:54.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:58:56.484: INFO: Retrying .... 
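
For context on the [By Step] banner ("creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled"), the object under test is roughly a Service like the sketch below. This is an assumption-laden illustration rather than the framework's code: the GCE annotation key is one common choice, the selector and ports are invented, and the real e2e framework picks the provider-appropriate annotation itself.

```go
// Illustrative sketch of an internal-LB Service comparable to "lb-internal";
// annotation key, selector, and ports are assumptions, not the framework's values.
package lbsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// CreateInternalLB creates a Service of type LoadBalancer carrying a cloud-specific
// internal load balancer annotation (the GCE key is shown here as an example).
func CreateInternalLB(cs kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "lb-internal",
			Annotations: map[string]string{
				// GCE/GKE annotation; other clouds use different keys.
				"cloud.google.com/load-balancer-type": "Internal",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "lb-internal"}, // assumed pod selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
}
```

Until the cloud provider programs the internal load balancer and the Service status gains an ingress entry, WaitForLoadBalancer keeps polling; that wait is exactly what the log is recording.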
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 14m49.231s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m20.046s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 12m11.622s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:58:58.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:00.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:06.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:08.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:14.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m9.233s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 12m40.048s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 12m31.624s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:59:18.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:20.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:24.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:26.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:32.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:36.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m29.236s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m0.051s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 12m51.627s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:59:38.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:50.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:54.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 14:59:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 15m49.238s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m20.053s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 13m11.629s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 14:59:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:02.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:10.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:14.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:16.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m9.24s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 13m40.055s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 13m31.631s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:00:18.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:20.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:24.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:26.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:28.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:30.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:32.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:36.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m29.242s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m0.057s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 13m51.633s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:00:38.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:40.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:44.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:46.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:48.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:50.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:52.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:54.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:00:56.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 16m49.244s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m20.059s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 14m11.635s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:00:58.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:00.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:02.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:04.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:06.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:08.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:10.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:12.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:14.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:16.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 17m9.247s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 14m40.062s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 14m31.638s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:01:18.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:20.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:22.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:24.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:26.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:28.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:30.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:32.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:34.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:36.484: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 17m29.248s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 15m0.063s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 14m51.639s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 1034 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004e03380, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004092d80?, 0xc000777b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc000204cc0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0040961e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0040961e0, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000ad0300}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 15:01:38.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:40.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:42.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:44.483: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:46.484: INFO: Retrying .... error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:46.523: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/services/lb-internal": dial tcp 34.83.118.239:443: connect: connection refused Nov 26 15:01:46.523: INFO: Unexpected error: <*fmt.wrapError | 0xc00408fe60>: { msg: "timed out waiting for service \"lb-internal\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc000209ce0>{ s: "timed out waiting for the condition", }, } Nov 26 15:01:46.523: FAIL: timed out waiting for service "lb-internal" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:606 +0x2df STEP: Clean up loadbalancer service 11/26/22 15:01:46.523 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 15:01:46.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 15:01:46.563: INFO: Output of kubectl describe svc: Nov 26 15:01:46.563: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-5567 describe svc --namespace=loadbalancers-5567' Nov 26 15:01:46.671: INFO: rc: 1 Nov 26 15:01:46.671: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 15:01:46.672 STEP: Collecting events from namespace "loadbalancers-5567". 11/26/22 15:01:46.672 Nov 26 15:01:46.711: INFO: Unexpected error: failed to list events in namespace "loadbalancers-5567": <*url.Error | 0xc0050e0bd0>: { Op: "Get", URL: "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/events", Err: <*net.OpError | 0xc004097cc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003652540>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 118, 239], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0015bf200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 15:01:46.712: FAIL: failed to list events in namespace "loadbalancers-5567": Get "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567/events": dial tcp 34.83.118.239:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00158a5c0, {0xc000157ba8, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc004077380}, {0xc000157ba8, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00158a650?, {0xc000157ba8?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0010b84b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0048e0f50?, 0xc000e5af50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000e5af40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0048e0f50?, 0x2622c40?}, {0xae73300?, 0xc000e5af80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 
STEP: Destroying namespace "loadbalancers-5567" for this suite. 11/26/22 15:01:46.712 Nov 26 15:01:46.752: FAIL: Couldn't delete ns: "loadbalancers-5567": Delete "https://34.83.118.239/api/v1/namespaces/loadbalancers-5567": dial tcp 34.83.118.239:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.118.239/api/v1/namespaces/loadbalancers-5567", Err:(*net.OpError)(0xc001ae2050)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0010b84b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0048e0ed0?, 0xc002617fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0048e0ed0?, 0x0?}, {0xae73300?, 0x5?, 0xc002237a58?}) /usr/local/go/src/reflect/value.go:368 +0xbc
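The repeated "Retrying ...." lines and the goroutine stack above come from the e2e framework polling the Service through wait.PollImmediate (via TestJig.WaitForLoadBalancer) until it reports a load-balancer ingress; with the apiserver refusing connections, every 2s attempt fails until the overall timeout fires. Below is a minimal, self-contained sketch of that polling pattern, not the framework's actual TestJig code; the kubeconfig path, namespace, and service name are copied from the log purely for illustration.

// Hedged sketch: poll a LoadBalancer Service until it has an ingress endpoint,
// swallowing transient API errors so the loop keeps retrying (as in the log above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLoadBalancerIngress(cs kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	// PollImmediate retries every 2s until the Service has an ingress IP/hostname
	// or the timeout elapses, mirroring the "Retrying ...." cadence in the log.
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil // transient error: keep polling instead of failing
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // Service exists but the cloud LB is not provisioned yet
		}
		svc = s
		return true, nil
	})
	return svc, err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	svc, err := waitForLoadBalancerIngress(cs, "loadbalancers-5567", "lb-internal", 15*time.Minute)
	if err != nil {
		// Same failure mode as above: the poll times out if the apiserver never
		// comes back or the load balancer is never provisioned.
		panic(fmt.Errorf("timed out waiting for service %q to have a load balancer: %w", "lb-internal", err))
	}
	fmt.Println("load balancer ready:", svc.Status.LoadBalancer.Ingress)
}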
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc0023c2b60}, 0xc00286c000, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.11() test/e2e/network/loadbalancer.go:809 +0xf3 from junit_01.xml
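This spec exercises switching a LoadBalancer Service between ClientIP and None session affinity (the `kubectl describe svc` output later in the log shows "Session Affinity: ClientIP"). A minimal sketch of that toggle with client-go follows; it is not the framework's execAffinityTestForLBServiceWithOptionalTransition helper, and the kubeconfig path, namespace, and service name are taken from the log only as placeholders.

// Hedged sketch: flip a Service's session affinity, roughly the transition the
// "switch session affinity" spec performs before probing backends for stickiness.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func setSessionAffinity(cs kubernetes.Interface, ns, name string, affinity corev1.ServiceAffinity) error {
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.SessionAffinity = affinity // corev1.ServiceAffinityClientIP or corev1.ServiceAffinityNone
	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Toggle affinity off and back on; the real spec checks live traffic after each change.
	if err := setSessionAffinity(cs, "loadbalancers-9507", "affinity-lb-transition", corev1.ServiceAffinityNone); err != nil {
		panic(err)
	}
	if err := setSessionAffinity(cs, "loadbalancers-9507", "affinity-lb-transition", corev1.ServiceAffinityClientIP); err != nil {
		panic(err)
	}
	fmt.Println("session affinity toggled")
}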
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:43:15.146 Nov 26 14:43:15.146: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 14:43:15.148 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:43:15.285 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:43:15.367 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:802 STEP: creating service in namespace loadbalancers-9507 11/26/22 14:43:15.497 STEP: creating service affinity-lb-transition in namespace loadbalancers-9507 11/26/22 14:43:15.497 STEP: creating replication controller affinity-lb-transition in namespace loadbalancers-9507 11/26/22 14:43:15.551 I1126 14:43:15.597539 8201 runners.go:193] Created replication controller with name: affinity-lb-transition, namespace: loadbalancers-9507, replica count: 3 I1126 14:43:18.649447 8201 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:43:21.650400 8201 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1126 14:43:24.651540 8201 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:43:24.651559 8201 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-r2mh I1126 14:43:24.710150 8201 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-r2mh 1a723982-de14-44dd-ba83-f2a219df5b69 4287 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-r2mh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-r2mh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2338":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-multivolume-3585":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-provisioning-9596":"bootstrap-e2e-minion-group-r2mh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:40:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:42:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:43:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-r2mh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.108.57,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b02b78c4c4f55f886d1255b57a8f72a9,SystemUUID:b02b78c4-c4f5-5f88-6d12-55b57a8f72a9,BootID:89f45761-3bf9-44b7-ab35-4ef95f8fa75c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163 kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d,DevicePath:,},},Config:nil,},} I1126 14:43:24.710721 8201 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-r2mh I1126 14:43:24.849015 8201 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-r2mh I1126 14:43:25.191875 8201 runners.go:193] metadata-proxy-v0.1-66r9l started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) I1126 14:43:25.191903 8201 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1126 14:43:25.191908 8201 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1126 14:43:25.191912 8201 runners.go:193] konnectivity-agent-tb7mp started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.191918 8201 runners.go:193] Container konnectivity-agent ready: false, restart count 2 I1126 14:43:25.191923 8201 runners.go:193] pod-configmaps-083a45f1-1cc7-4319-bec1-83b30373c023 started at 2022-11-26 14:39:17 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.191928 8201 runners.go:193] Container agnhost-container ready: false, restart count 0 I1126 14:43:25.191932 8201 runners.go:193] csi-hostpathplugin-0 started at 2022-11-26 14:39:30 +0000 UTC (0+7 container statuses recorded) I1126 14:43:25.191937 8201 runners.go:193] Container csi-attacher ready: true, restart count 4 I1126 14:43:25.191941 8201 runners.go:193] Container csi-provisioner ready: true, restart count 4 I1126 14:43:25.191947 8201 runners.go:193] Container csi-resizer ready: true, restart count 4 I1126 14:43:25.191951 8201 runners.go:193] Container csi-snapshotter ready: true, restart count 4 I1126 14:43:25.191955 8201 runners.go:193] Container hostpath ready: true, restart count 4 I1126 14:43:25.191959 8201 runners.go:193] Container liveness-probe ready: true, restart count 4 I1126 14:43:25.191963 8201 runners.go:193] Container node-driver-registrar ready: true, restart count 4 I1126 14:43:25.191966 8201 runners.go:193] netserver-2 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.191972 8201 runners.go:193] Container webserver ready: true, restart count 0 I1126 14:43:25.191976 8201 runners.go:193] hostexec-bootstrap-e2e-minion-group-r2mh-jx5n7 started at 2022-11-26 14:43:14 
+0000 UTC (0+1 container statuses recorded) I1126 14:43:25.191981 8201 runners.go:193] Container agnhost-container ready: true, restart count 0 I1126 14:43:25.191985 8201 runners.go:193] pod-subpath-test-dynamicpv-r9t6 started at 2022-11-26 14:39:37 +0000 UTC (1+1 container statuses recorded) I1126 14:43:25.191990 8201 runners.go:193] Init container init-volume-dynamicpv-r9t6 ready: false, restart count 0 I1126 14:43:25.191994 8201 runners.go:193] Container test-container-subpath-dynamicpv-r9t6 ready: false, restart count 0 I1126 14:43:25.191998 8201 runners.go:193] pod-9f945ae2-b2e9-4784-8ff8-108d273c77c3 started at 2022-11-26 14:39:38 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.192004 8201 runners.go:193] Container write-pod ready: false, restart count 0 I1126 14:43:25.192038 8201 runners.go:193] csi-hostpathplugin-0 started at 2022-11-26 14:40:10 +0000 UTC (0+7 container statuses recorded) I1126 14:43:25.192043 8201 runners.go:193] Container csi-attacher ready: true, restart count 2 I1126 14:43:25.192047 8201 runners.go:193] Container csi-provisioner ready: true, restart count 2 I1126 14:43:25.192051 8201 runners.go:193] Container csi-resizer ready: true, restart count 2 I1126 14:43:25.192055 8201 runners.go:193] Container csi-snapshotter ready: true, restart count 2 I1126 14:43:25.192058 8201 runners.go:193] Container hostpath ready: true, restart count 2 I1126 14:43:25.192062 8201 runners.go:193] Container liveness-probe ready: true, restart count 2 I1126 14:43:25.192066 8201 runners.go:193] Container node-driver-registrar ready: true, restart count 2 I1126 14:43:25.192070 8201 runners.go:193] affinity-lb-transition-n2tc5 started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.192075 8201 runners.go:193] Container affinity-lb-transition ready: true, restart count 1 I1126 14:43:25.192079 8201 runners.go:193] csi-hostpathplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+7 container statuses recorded) I1126 14:43:25.192085 8201 runners.go:193] Container csi-attacher ready: true, restart count 3 I1126 14:43:25.192088 8201 runners.go:193] Container csi-provisioner ready: true, restart count 3 I1126 14:43:25.192092 8201 runners.go:193] Container csi-resizer ready: true, restart count 3 I1126 14:43:25.192095 8201 runners.go:193] Container csi-snapshotter ready: true, restart count 3 I1126 14:43:25.192099 8201 runners.go:193] Container hostpath ready: true, restart count 3 I1126 14:43:25.192103 8201 runners.go:193] Container liveness-probe ready: true, restart count 3 I1126 14:43:25.192107 8201 runners.go:193] Container node-driver-registrar ready: true, restart count 3 I1126 14:43:25.192113 8201 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-r2mh started at 2022-11-26 14:37:17 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.192118 8201 runners.go:193] Container kube-proxy ready: false, restart count 4 I1126 14:43:25.192122 8201 runners.go:193] coredns-6d97d5ddb-wmgqj started at 2022-11-26 14:37:40 +0000 UTC (0+1 container statuses recorded) I1126 14:43:25.192126 8201 runners.go:193] Container coredns ready: false, restart count 3 I1126 14:43:25.192130 8201 runners.go:193] csi-mockplugin-0 started at 2022-11-26 14:40:16 +0000 UTC (0+4 container statuses recorded) I1126 14:43:25.192135 8201 runners.go:193] Container busybox ready: true, restart count 3 I1126 14:43:25.192138 8201 runners.go:193] Container csi-provisioner ready: false, restart count 3 I1126 14:43:25.192142 8201 runners.go:193] Container driver-registrar ready: 
false, restart count 3 I1126 14:43:25.192146 8201 runners.go:193] Container mock ready: false, restart count 3 I1126 14:43:25.780083 8201 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-r2mh I1126 14:43:25.845205 8201 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-9507 Nov 26 14:43:25.932: INFO: Logs of loadbalancers-9507/affinity-lb-transition-n2tc5:affinity-lb-transition on node bootstrap-e2e-minion-group-r2mh Nov 26 14:43:25.932: INFO: : STARTLOG I1126 14:43:21.691353 1 log.go:198] Serving on port 9376. I1126 14:43:22.066510 1 log.go:198] Shutting down after receiving signal: terminated. I1126 14:43:22.066541 1 log.go:198] Awaiting pod deletion. ENDLOG for container loadbalancers-9507:affinity-lb-transition-n2tc5:affinity-lb-transition Nov 26 14:43:25.932: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-9507: <*errors.errorString | 0xc001dfff30>: { s: "1 containers failed which is more than allowed 0", } Nov 26 14:43:25.933: FAIL: failed to create replication controller with service in the namespace: loadbalancers-9507: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc0023c2b60}, 0xc00286c000, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.11() test/e2e/network/loadbalancer.go:809 +0xf3 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 14:43:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 14:43:25.983: INFO: Output of kubectl describe svc: Nov 26 14:43:25.983: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.118.239 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-9507 describe svc --namespace=loadbalancers-9507' Nov 26 14:43:26.461: INFO: stderr: "" Nov 26 14:43:26.461: INFO: stdout: "Name: affinity-lb-transition\nNamespace: loadbalancers-9507\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-transition\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.123.207\nIPs: 10.0.123.207\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 30523/TCP\nEndpoints: 10.64.2.54:9376,10.64.3.90:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 26 14:43:26.461: INFO: Name: affinity-lb-transition Namespace: loadbalancers-9507 Labels: <none> Annotations: <none> Selector: name=affinity-lb-transition Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.123.207 IPs: 10.0.123.207 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 30523/TCP Endpoints: 10.64.2.54:9376,10.64.3.90:9376 Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 14:43:26.461 STEP: Collecting events from namespace "loadbalancers-9507". 11/26/22 14:43:26.461 STEP: Found 19 events. 
11/26/22 14:43:26.514 Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-fvtxg Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-n2tc5 Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-tgknb Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition-fvtxg: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9507/affinity-lb-transition-fvtxg to bootstrap-e2e-minion-group-5c8w Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition-n2tc5: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9507/affinity-lb-transition-n2tc5 to bootstrap-e2e-minion-group-r2mh Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:15 +0000 UTC - event for affinity-lb-transition-tgknb: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9507/affinity-lb-transition-tgknb to bootstrap-e2e-minion-group-90df Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:16 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} Started: Started container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:16 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} Created: Created container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:16 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-fvtxg: {kubelet bootstrap-e2e-minion-group-5c8w} Created: Created container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-fvtxg: {kubelet bootstrap-e2e-minion-group-5c8w} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} Killing: Stopping container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-tgknb: {kubelet bootstrap-e2e-minion-group-90df} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-tgknb: {kubelet bootstrap-e2e-minion-group-90df} Created: Created container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:18 +0000 UTC - event for affinity-lb-transition-tgknb: {kubelet bootstrap-e2e-minion-group-90df} Started: Started container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:19 +0000 UTC - event for affinity-lb-transition-fvtxg: {kubelet bootstrap-e2e-minion-group-5c8w} Started: Started container affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:20 +0000 UTC - event for affinity-lb-transition-fvtxg: {kubelet bootstrap-e2e-minion-group-5c8w} Killing: Stopping container 
affinity-lb-transition Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:21 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 14:43:26.514: INFO: At 2022-11-26 14:43:25 +0000 UTC - event for affinity-lb-transition-n2tc5: {kubelet bootstrap-e2e-minion-group-r2mh} BackOff: Back-off restarting failed container affinity-lb-transition in pod affinity-lb-transition-n2tc5_loadbalancers-9507(ced86dfb-9e24-4888-be8a-f4c1523ec58e) Nov 26 14:43:26.576: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 14:43:26.576: INFO: affinity-lb-transition-fvtxg bootstrap-e2e-minion-group-5c8w Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC }] Nov 26 14:43:26.576: INFO: affinity-lb-transition-n2tc5 bootstrap-e2e-minion-group-r2mh Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:25 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-transition]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:25 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-transition]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC }] Nov 26 14:43:26.576: INFO: affinity-lb-transition-tgknb bootstrap-e2e-minion-group-90df Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 14:43:15 +0000 UTC }] Nov 26 14:43:26.576: INFO: Nov 26 14:43:27.184: INFO: Logging node info for node bootstrap-e2e-master Nov 26 14:43:27.286: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 39f76886-2c7b-440b-9ef2-f11a2bfefeb1 3886 0 2022-11-26 14:37:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 
14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 14:42:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:20 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:42:41 +0000 UTC,LastTransitionTime:2022-11-26 14:37:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.118.239,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e3e070cfc8d4db9e880daaf5c4a65019,SystemUUID:e3e070cf-c8d4-db9e-880d-aaf5c4a65019,BootID:ff2901fa-ae19-4286-a08b-110e9e385f96,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:43:27.286: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 14:43:27.395: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 14:43:27.662: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container kube-scheduler ready: true, restart count 2 Nov 26 14:43:27.662: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container konnectivity-server-container ready: true, restart count 2 Nov 26 14:43:27.662: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container l7-lb-controller ready: true, restart count 4 Nov 26 14:43:27.662: INFO: 
etcd-server-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container etcd-container ready: true, restart count 1 Nov 26 14:43:27.662: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 14:36:47 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 26 14:43:27.662: INFO: metadata-proxy-v0.1-b48tm started at 2022-11-26 14:37:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:43:27.662: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:43:27.662: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:43:27.662: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container kube-apiserver ready: true, restart count 1 Nov 26 14:43:27.662: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 26 14:43:27.662: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 14:36:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:27.662: INFO: Container etcd-container ready: true, restart count 1 Nov 26 14:43:27.933: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 14:43:27.933: INFO: Logging node info for node bootstrap-e2e-minion-group-5c8w Nov 26 14:43:28.025: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-5c8w 9c1c2738-39d8-4fbd-8eb9-cd823476dc17 4300 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-5c8w kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-5c8w topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6811":"bootstrap-e2e-minion-group-5c8w"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:40:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:42:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-5c8w,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 
UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:32 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:53 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:53 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:53 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:40:53 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.230.112.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-5c8w.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f8110843d1932588ce84ecdf8f74c3c9,SystemUUID:f8110843-d193-2588-ce84-ecdf8f74c3c9,BootID:ebb46ff0-b8ec-405f-80a1-fbaa69879823,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6811^3ec72831-6d98-11ed-92c7-5e18ea5c5386,DevicePath:,},},Config:nil,},} Nov 26 14:43:28.025: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-5c8w Nov 26 14:43:28.124: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-5c8w Nov 26 14:43:28.317: INFO: kube-proxy-bootstrap-e2e-minion-group-5c8w started at 2022-11-26 14:37:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container kube-proxy ready: false, restart count 3 Nov 26 14:43:28.317: INFO: pod-secrets-5fdd18ad-0588-44cf-82a3-528f3248be63 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-h5pvn started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:28.317: INFO: test-hostpath-type-7x592 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 14:43:28.317: INFO: metrics-server-v0.5.2-867b8754b9-vrm2k started at 2022-11-26 14:38:16 +0000 UTC (0+2 container statuses recorded) Nov 26 14:43:28.317: INFO: Container metrics-server ready: false, restart count 3 Nov 26 14:43:28.317: INFO: Container metrics-server-nanny ready: false, restart count 4 Nov 26 14:43:28.317: INFO: failure-3 started at 2022-11-26 14:39:52 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container failure-3 ready: true, restart count 2 Nov 26 14:43:28.317: INFO: pod-9167b845-e5a4-4f53-8d7b-8d6705e552fb started at 2022-11-26 14:40:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:43:28.317: INFO: pod-7dbe6380-f5d0-4852-b8bf-7231eca57b67 started at 2022-11-26 14:40:23 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:43:28.317: INFO: netserver-0 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container webserver ready: true, restart count 5 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-9s9ds started at 2022-11-26 14:39:43 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, 
restart count 2 Nov 26 14:43:28.317: INFO: pod-6be3caae-2380-4995-afed-16e4c49357fb started at 2022-11-26 14:39:54 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:43:28.317: INFO: mutability-test-fxk7p started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container netexec ready: true, restart count 0 Nov 26 14:43:28.317: INFO: metadata-proxy-v0.1-xb4cm started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:43:28.317: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:43:28.317: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-wks5s started at 2022-11-26 14:40:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-46mpd started at 2022-11-26 14:43:13 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-jjgn4 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:43:28.317: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:09 +0000 UTC (0+7 container statuses recorded) Nov 26 14:43:28.317: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container hostpath ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 14:43:28.317: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 14:43:28.317: INFO: affinity-lb-transition-fvtxg started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container affinity-lb-transition ready: false, restart count 0 Nov 26 14:43:28.317: INFO: konnectivity-agent-cnxt9 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-478nt started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:28.317: INFO: external-provisioner-lt7f5 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container nfs-provisioner ready: false, restart count 0 Nov 26 14:43:28.317: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:43:16 +0000 UTC (0+7 container statuses recorded) Nov 26 14:43:28.317: INFO: Container csi-attacher ready: false, restart count 0 Nov 26 14:43:28.317: INFO: Container csi-provisioner ready: false, restart count 0 Nov 26 14:43:28.317: INFO: Container csi-resizer ready: false, restart count 0 Nov 26 14:43:28.317: INFO: Container csi-snapshotter ready: false, restart count 0 Nov 26 14:43:28.317: INFO: Container hostpath ready: false, restart count 0 Nov 26 14:43:28.317: INFO: 
Container liveness-probe ready: false, restart count 0 Nov 26 14:43:28.317: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 26 14:43:28.317: INFO: test-hostpath-type-kwqtc started at 2022-11-26 14:43:20 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-cswzg started at 2022-11-26 14:39:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: false, restart count 4 Nov 26 14:43:28.317: INFO: hostexec-bootstrap-e2e-minion-group-5c8w-c556l started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:28.317: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 14:43:28.721: INFO: Latency metrics for node bootstrap-e2e-minion-group-5c8w Nov 26 14:43:28.721: INFO: Logging node info for node bootstrap-e2e-minion-group-90df Nov 26 14:43:28.789: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-90df 47ba7cef-c7a9-42dc-a972-e2581f5476da 3871 0 2022-11-26 14:37:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-90df kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-90df topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1468":"csi-mock-csi-mock-volumes-1468"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 14:40:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} 
{node-problem-detector Update v1 2022-11-26 14:42:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-90df,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:34 +0000 UTC,LastTransitionTime:2022-11-26 14:37:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:24 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:24 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:24 +0000 UTC,LastTransitionTime:2022-11-26 14:37:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:40:24 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.184.32,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-90df.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83ac01160a0c57758b5edb61ebb59ab4,SystemUUID:83ac0116-0a0c-5775-8b5e-db61ebb59ab4,BootID:8fc388ac-8473-4c3d-8b39-12dca64dff04,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 14:43:28.790: INFO: Logging kubelet events for node 
bootstrap-e2e-minion-group-90df Nov 26 14:43:28.891: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-90df Nov 26 14:43:29.121: INFO: external-provisioner-7gtz8 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 26 14:43:29.121: INFO: back-off-cap started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container back-off-cap ready: false, restart count 4 Nov 26 14:43:29.121: INFO: failure-4 started at 2022-11-26 14:41:30 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container failure-4 ready: false, restart count 0 Nov 26 14:43:29.121: INFO: pod-057c12b8-fcaa-47f7-b71f-aa3400ae7e4d started at 2022-11-26 14:41:44 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container write-pod ready: true, restart count 0 Nov 26 14:43:29.121: INFO: hostpath-symlink-prep-provisioning-7611 started at 2022-11-26 14:41:44 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container init-volume-provisioning-7611 ready: false, restart count 0 Nov 26 14:43:29.121: INFO: external-local-update-92js7 started at 2022-11-26 14:41:45 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container netexec ready: true, restart count 2 Nov 26 14:43:29.121: INFO: volume-snapshot-controller-0 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container volume-snapshot-controller ready: false, restart count 4 Nov 26 14:43:29.121: INFO: httpd started at 2022-11-26 14:39:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container httpd ready: true, restart count 1 Nov 26 14:43:29.121: INFO: execpod-drops5lkl started at 2022-11-26 14:43:22 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 14:43:29.121: INFO: httpd started at 2022-11-26 14:41:18 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container httpd ready: false, restart count 4 Nov 26 14:43:29.121: INFO: hostexec-bootstrap-e2e-minion-group-90df-ql9k6 started at 2022-11-26 14:41:32 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 14:43:29.121: INFO: test-hostpath-type-jcjmw started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 14:43:29.121: INFO: metadata-proxy-v0.1-ghfq8 started at 2022-11-26 14:37:20 +0000 UTC (0+2 container statuses recorded) Nov 26 14:43:29.121: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:43:29.121: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:43:29.121: INFO: hostexec-bootstrap-e2e-minion-group-90df-zwwz7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:29.121: INFO: csi-mockplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+4 container statuses recorded) Nov 26 14:43:29.121: INFO: Container busybox ready: true, restart count 0 Nov 26 14:43:29.121: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 14:43:29.121: INFO: Container driver-registrar ready: true, restart count 0 Nov 26 14:43:29.121: INFO: Container 
mock ready: true, restart count 0 Nov 26 14:43:29.121: INFO: affinity-lb-transition-tgknb started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container affinity-lb-transition ready: true, restart count 0 Nov 26 14:43:29.121: INFO: l7-default-backend-8549d69d99-s4b5m started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 14:43:29.121: INFO: konnectivity-agent-8rxr7 started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container konnectivity-agent ready: true, restart count 2 Nov 26 14:43:29.121: INFO: netserver-1 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container webserver ready: true, restart count 4 Nov 26 14:43:29.121: INFO: hostexec-bootstrap-e2e-minion-group-90df-42ljh started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:29.121: INFO: execpod-accepttbp6q started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:29.121: INFO: kube-proxy-bootstrap-e2e-minion-group-90df started at 2022-11-26 14:37:19 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container kube-proxy ready: false, restart count 3 Nov 26 14:43:29.121: INFO: pod-subpath-test-inlinevolume-7q7t started at 2022-11-26 14:41:11 +0000 UTC (1+2 container statuses recorded) Nov 26 14:43:29.121: INFO: Init container init-volume-inlinevolume-7q7t ready: true, restart count 0 Nov 26 14:43:29.121: INFO: Container test-container-subpath-inlinevolume-7q7t ready: true, restart count 4 Nov 26 14:43:29.121: INFO: Container test-container-volume-inlinevolume-7q7t ready: false, restart count 3 Nov 26 14:43:29.121: INFO: hostpath-io-client started at 2022-11-26 14:43:14 +0000 UTC (1+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Init container hostpath-io-init ready: true, restart count 0 Nov 26 14:43:29.121: INFO: Container hostpath-io-client ready: true, restart count 0 Nov 26 14:43:29.121: INFO: kube-dns-autoscaler-5f6455f985-g8dtn started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container autoscaler ready: true, restart count 4 Nov 26 14:43:29.121: INFO: coredns-6d97d5ddb-thsmq started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container coredns ready: false, restart count 4 Nov 26 14:43:29.121: INFO: external-provisioner-7gjgf started at 2022-11-26 14:41:31 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 26 14:43:29.121: INFO: inclusterclient started at <nil> (0+0 container statuses recorded) Nov 26 14:43:29.121: INFO: mutability-test-pdxr6 started at 2022-11-26 14:40:05 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container netexec ready: true, restart count 2 Nov 26 14:43:29.121: INFO: pod-secrets-ca2e8eea-812f-46de-9d25-74cfeb71e013 started at 2022-11-26 14:41:16 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.121: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 14:43:29.534: INFO: Latency metrics for node bootstrap-e2e-minion-group-90df Nov 26 14:43:29.534: 
INFO: Logging node info for node bootstrap-e2e-minion-group-r2mh Nov 26 14:43:29.588: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-r2mh 1a723982-de14-44dd-ba83-f2a219df5b69 4287 0 2022-11-26 14:37:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-r2mh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-r2mh topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2338":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-multivolume-3585":"bootstrap-e2e-minion-group-r2mh","csi-hostpath-provisioning-9596":"bootstrap-e2e-minion-group-r2mh"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 14:37:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 14:37:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 14:40:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 14:42:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 14:43:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-09/us-west1-b/bootstrap-e2e-minion-group-r2mh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 14:42:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:20 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 14:37:33 +0000 UTC,LastTransitionTime:2022-11-26 14:37:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 
14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 14:40:52 +0000 UTC,LastTransitionTime:2022-11-26 14:37:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.230.108.57,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-r2mh.c.k8s-boskos-gce-project-09.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b02b78c4c4f55f886d1255b57a8f72a9,SystemUUID:b02b78c4-c4f5-5f88-6d12-55b57a8f72a9,BootID:89f45761-3bf9-44b7-ab35-4ef95f8fa75c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163 kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-3585^2695f7b3-6d98-11ed-b37c-ae387406f163,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9596^26926834-6d98-11ed-bd96-62e2a582563d,DevicePath:,},},Config:nil,},} Nov 26 14:43:29.589: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-r2mh Nov 26 14:43:29.643: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-r2mh Nov 26 14:43:29.755: INFO: hostexec-bootstrap-e2e-minion-group-r2mh-jx5n7 started at 2022-11-26 14:43:14 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 14:43:29.755: INFO: pod-subpath-test-dynamicpv-r9t6 started at 2022-11-26 14:39:37 +0000 UTC (1+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Init container init-volume-dynamicpv-r9t6 ready: false, restart count 0 Nov 26 14:43:29.755: INFO: Container test-container-subpath-dynamicpv-r9t6 ready: false, restart count 0 Nov 26 14:43:29.755: INFO: pod-9f945ae2-b2e9-4784-8ff8-108d273c77c3 started at 2022-11-26 14:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container write-pod ready: false, restart count 0 Nov 26 14:43:29.755: INFO: affinity-lb-transition-n2tc5 started at 2022-11-26 14:43:15 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container affinity-lb-transition ready: false, restart count 1 Nov 26 14:43:29.755: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:19 +0000 UTC (0+7 container statuses recorded) Nov 26 14:43:29.755: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container hostpath ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 14:43:29.755: 
INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 14:43:29.755: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:40:10 +0000 UTC (0+7 container statuses recorded) Nov 26 14:43:29.755: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container hostpath ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 14:43:29.755: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 14:43:29.755: INFO: coredns-6d97d5ddb-wmgqj started at 2022-11-26 14:37:40 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container coredns ready: false, restart count 3 Nov 26 14:43:29.755: INFO: csi-mockplugin-0 started at 2022-11-26 14:40:16 +0000 UTC (0+4 container statuses recorded) Nov 26 14:43:29.755: INFO: Container busybox ready: true, restart count 3 Nov 26 14:43:29.755: INFO: Container csi-provisioner ready: false, restart count 3 Nov 26 14:43:29.755: INFO: Container driver-registrar ready: false, restart count 3 Nov 26 14:43:29.755: INFO: Container mock ready: false, restart count 3 Nov 26 14:43:29.755: INFO: kube-proxy-bootstrap-e2e-minion-group-r2mh started at 2022-11-26 14:37:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container kube-proxy ready: false, restart count 4 Nov 26 14:43:29.755: INFO: konnectivity-agent-tb7mp started at 2022-11-26 14:37:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container konnectivity-agent ready: false, restart count 2 Nov 26 14:43:29.755: INFO: pod-configmaps-083a45f1-1cc7-4319-bec1-83b30373c023 started at 2022-11-26 14:39:17 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 14:43:29.755: INFO: csi-hostpathplugin-0 started at 2022-11-26 14:39:30 +0000 UTC (0+7 container statuses recorded) Nov 26 14:43:29.755: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container hostpath ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 14:43:29.755: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 14:43:29.755: INFO: metadata-proxy-v0.1-66r9l started at 2022-11-26 14:37:18 +0000 UTC (0+2 container statuses recorded) Nov 26 14:43:29.755: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 14:43:29.755: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 14:43:29.755: INFO: netserver-2 started at 2022-11-26 14:39:33 +0000 UTC (0+1 container statuses recorded) Nov 26 14:43:29.755: INFO: Container webserver ready: true, restart count 0 Nov 26 14:43:30.107: INFO: Latency metrics for node bootstrap-e2e-minion-group-r2mh [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-9507" for this suite. 11/26/22 14:43:30.107
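The node dump above is what the e2e framework emits when it fetches the full Node object (conditions, images, attached volumes) and the kubelet's pod list while collecting debug data for the failed test. As a point of reference only, a minimal, hypothetical client-go sketch that fetches a node and prints its conditions, roughly mirroring the NodeCondition entries in the dump, might look like the following; it is not the framework's actual dump helper, and the kubeconfig path, node name, and flag names are assumptions taken from the log above.

// Hypothetical sketch: fetch a Node with client-go and print its conditions.
// Not the e2e framework's dump helper; defaults below are assumptions.
package main

import (
	"context"
	"flag"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "/workspace/.kube/config", "path to kubeconfig")
	nodeName := flag.String("node", "bootstrap-e2e-minion-group-r2mh", "node to inspect")
	flag.Parse()

	// Build a REST config from the kubeconfig, as the test harness does.
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Fetch the Node object and print each condition (Ready, MemoryPressure,
	// DiskPressure, PIDPressure, ...), similar to the NodeCondition entries above.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), *nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-28s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}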
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/debug/dump.go:44 k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001419da0, {0xc000632c30, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000d54820}, {0xc000632c30, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0005c33d0?, {0xc000632c30?, 0x1?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:341 +0x82d k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f104b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0005c3580?, 0xc0013acf08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000a7a930?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0005c3580?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
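The log that follows records the affinity check for this test: it repeatedly pokes the service's external IP (http://34.127.89.252:80) and notes which backend pod (affinity-lb-esipp-transition-*) answered, so it can tell whether session affinity is holding or transitioning after the switch. A minimal, hypothetical poke loop in the same spirit, using only the standard library rather than the framework's actual poke/affinity helpers, might look like this; the URL, iteration count, and timeout are assumptions based on the log.

// Hypothetical sketch of an affinity poke loop: repeatedly GET the
// LoadBalancer IP and tally which backend hostname answered.
// Not the e2e framework's helper; values below are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "http://34.127.89.252:80" // external IP seen in the log below
	client := &http.Client{Timeout: 2 * time.Second}
	hosts := map[string]int{}

	for i := 0; i < 15; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("Poke(%q): %v\n", url, err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// The test backend replies with its own pod name.
		host := strings.TrimSpace(string(body))
		hosts[host]++
		fmt.Printf("Poke(%q): success, host %s\n", url, host)
	}

	// With ClientIP affinity on, the tally should be dominated by one pod;
	// after affinity is switched off, responses spread across the backends,
	// which is the pattern visible in the log entries that follow.
	fmt.Println("responses per host:", hosts)
}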
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 14:39:16.708 Nov 26 14:39:16.708: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 14:39:16.71 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 14:39:16.835 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 14:39:16.952 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:780 STEP: creating service in namespace loadbalancers-8738 11/26/22 14:39:17.12 STEP: creating service affinity-lb-esipp-transition in namespace loadbalancers-8738 11/26/22 14:39:17.12 STEP: creating replication controller affinity-lb-esipp-transition in namespace loadbalancers-8738 11/26/22 14:39:17.258 I1126 14:39:17.337600 8199 runners.go:193] Created replication controller with name: affinity-lb-esipp-transition, namespace: loadbalancers-8738, replica count: 3 I1126 14:39:20.438687 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:39:23.438918 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:39:26.439942 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:39:29.440046 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:39:32.440477 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 14:39:35.440694 8199 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: waiting for loadbalancer for service loadbalancers-8738/affinity-lb-esipp-transition 11/26/22 14:39:35.482 Nov 26 14:39:35.535: INFO: Waiting up to 15m0s for service "affinity-lb-esipp-transition" to have a LoadBalancer Nov 26 14:39:53.746: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:55.746: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:39:55.746: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:55.826: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:55.826: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:55.909: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:55.909: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:55.989: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:55.989: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.068: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.069: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.147: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.147: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.227: 
INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.227: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.306: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.306: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.384: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.384: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.462: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.463: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.543: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.543: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.781: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.781: INFO: Poking "http://34.127.89.252:80" Nov 26 14:39:56.860: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:56.860: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:39:58.861: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:00.862: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:00.862: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:02.862: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:02.863: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:04.864: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:04.864: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:06.865: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:06.865: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:08.865: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:08.865: INFO: Poking "http://34.127.89.252:80" Nov 26 
14:40:10.866: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:10.866: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:12.866: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:12.867: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:14.868: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:14.868: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:16.868: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:16.868: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:18.869: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:18.869: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:20.869: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:20.869: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:22.870: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:22.870: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:24.871: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:24.871: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:26.872: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:26.872: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:28.872: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:30.861: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:32.862: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:32.862: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:34.862: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:34.862: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:36.863: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:36.863: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:38.863: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:38.864: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:40.864: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:40:40.864: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:42.866: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 
26 14:40:42.866: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:43.968: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:43.968: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.054: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.054: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.135: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.135: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.335: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.335: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.413: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.413: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.492: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.492: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.675: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.675: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.753: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:45.753: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:45.887: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:45.970: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:45.970: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.049: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.049: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.127: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.127: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.207: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.207: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.285: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.285: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.364: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.364: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.442: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.442: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.521: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.521: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.600: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.600: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.678: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.678: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.756: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.756: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.835: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.835: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:40:46.913: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.913: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:46.993: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:46.993: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:47.072: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:47.072: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:49.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.465: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.465: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.858: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.858: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:49.937: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:49.937: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:50.015: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:50.015: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:50.094: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:50.094: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:50.172: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:50.173: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:40:50.251: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:50.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:51.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.150: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.150: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.650: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.650: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.729: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.729: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.808: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.808: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.886: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.886: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:51.965: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:51.965: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:52.043: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:52.043: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:52.121: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:52.121: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:52.200: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:52.200: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:52.279: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: 
affinity-lb-esipp-transition-l848s Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:52.279: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:53.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.386: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.386: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.470: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.470: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.548: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.548: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.627: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.627: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.735: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.735: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.814: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.814: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.892: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.892: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:53.971: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:53.971: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:54.049: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:54.049: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:54.128: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:54.128: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:54.206: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:54.206: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:54.285: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from 
host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:54.285: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:55.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.154: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.154: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.236: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.236: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.315: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.315: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.393: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.393: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.472: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.472: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.551: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.551: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.629: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.629: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.708: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.708: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.787: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.787: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.865: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.865: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:55.944: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:55.944: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:56.022: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:56.022: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:56.101: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:56.101: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:56.179: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:56.179: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:56.257: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response 
from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:56.257: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:57.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.150: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.150: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.307: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.307: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.386: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.386: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.464: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.464: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.543: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.543: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.622: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.622: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.779: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.779: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.871: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.871: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:57.950: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:57.950: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:58.028: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:58.028: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:58.107: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:58.107: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:58.186: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:58.186: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:58.264: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received 
response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:40:58.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:40:59.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.307: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.307: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.386: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.386: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.464: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.464: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.543: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.543: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.621: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.621: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.700: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.700: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.779: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.779: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.857: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.857: INFO: Poking "http://34.127.89.252:80" Nov 26 14:40:59.936: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:40:59.936: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:00.016: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:00.016: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:00.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:00.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:00.174: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:00.174: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:00.254: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: 
Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:00.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:01.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:01.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:01.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:02.017: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:02.017: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:02.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:02.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:02.174: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:02.174: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:02.253: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:02.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:03.073: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.151: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:03.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:03.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:04.017: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:04.017: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:04.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:04.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:04.173: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:04.174: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:04.252: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:04.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:05.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.308: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.465: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.465: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.622: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.622: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:05.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:05.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:06.016: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:06.016: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:06.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:06.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:06.173: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:06.173: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:06.252: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:06.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:07.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.466: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:07.937: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:07.937: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:08.016: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:08.016: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:08.094: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:08.094: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:08.174: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:08.174: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:08.252: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:08.252: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:09.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.388: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.388: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.624: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.624: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.781: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.781: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:09.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:09.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:10.016: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:10.016: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:10.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:10.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:10.175: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:10.175: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:10.253: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:10.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:11.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.152: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.394: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.394: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.473: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.473: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.551: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.551: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.630: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.630: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.708: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.708: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.787: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.787: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.865: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.865: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:11.944: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:11.944: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:12.022: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:12.022: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:12.101: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:12.101: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:12.179: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:12.179: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:12.258: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:12.258: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:13.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.858: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.858: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:13.943: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:13.943: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:14.030: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:14.030: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:14.109: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:14.109: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:14.196: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:14.196: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:14.274: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:14.274: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:15.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.545: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.545: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:15.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:15.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:16.019: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:16.019: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:16.097: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:16.097: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:16.176: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:16.176: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:16.254: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:16.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:17.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.152: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.388: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.388: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.545: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.545: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.781: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.781: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:17.938: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:17.938: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:18.017: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:18.017: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:18.095: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:18.095: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:18.175: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:18.175: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:18.253: INFO: 
Poke("http://34.127.89.252:80"): success Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:18.253: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:19.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.388: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.388: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.467: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.467: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.545: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.545: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.624: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.624: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.702: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.702: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.781: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.781: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:19.947: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:19.947: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:20.026: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:20.026: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:20.104: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:20.105: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:20.185: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:20.185: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:20.264: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response 
from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:20.264: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:21.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.309: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.309: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.403: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.403: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.481: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.481: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.560: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.560: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.638: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.638: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.720: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.720: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.799: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.799: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.881: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.881: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:21.965: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:21.965: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:22.043: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:22.043: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:22.199: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:22.199: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:22.279: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:22.279: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:22.374: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received 
response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:22.374: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:23.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.310: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.310: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.389: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.389: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.467: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.467: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.545: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.545: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.629: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.629: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.709: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.709: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.788: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.788: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.873: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.873: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:23.951: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:23.951: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:24.061: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:24.061: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:24.139: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:24.139: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:24.244: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:24.244: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:24.323: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: 
Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:24.323: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:25.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.386: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.386: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.472: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.472: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.550: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.550: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.628: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.628: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.707: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.707: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.785: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.785: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.865: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.865: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:25.943: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:25.943: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:26.022: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:26.022: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:26.108: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:26.108: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:26.187: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:26.187: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:26.266: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: 
INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:26.266: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:27.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.310: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.310: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.389: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.389: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.467: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.467: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.546: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.546: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.624: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.624: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.703: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.703: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.783: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.783: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.880: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.880: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:27.978: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:27.978: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:28.057: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:28.057: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:28.138: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:28.138: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:28.217: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:28.217: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:28.295: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 
14:41:28.295: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:29.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.234: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.234: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.327: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.327: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.417: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.417: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.496: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.496: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.575: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.575: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.653: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.653: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.771: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.771: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.851: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.851: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:29.930: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:29.930: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:30.010: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:30.010: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:30.096: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:30.096: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:30.175: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:30.175: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:30.254: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:30.254: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:30.332: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:30.332: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:31.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.154: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.154: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:41:31.233: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.233: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.312: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.312: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.390: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.390: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.469: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.469: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.548: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.548: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.626: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.626: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.706: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.706: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.784: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.784: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.863: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.863: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:31.942: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:31.942: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:32.021: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:32.021: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:32.061: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": dial tcp 34.127.89.252:80: connect: connection refused Nov 26 14:41:32.061: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:32.142: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:32.142: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:32.222: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:32.222: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:33.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:35.073: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:41:35.073: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:35.152: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:35.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:37.153: INFO: 
Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:41:37.153: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:37.232: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:37.232: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:37.310: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:37.310: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:37.389: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:37.389: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:37.468: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:37.468: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:39.468: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 14:41:39.468: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:39.546: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:39.546: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.547: INFO: Poke("http://34.127.89.252:80"): Get "http://34.127.89.252:80": dial tcp 34.127.89.252:80: i/o timeout (Client.Timeout exceeded while awaiting headers) Nov 26 14:41:41.547: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.625: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:41.625: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.704: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:41.704: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.783: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:41.783: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.862: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:41.862: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:41.942: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:41.942: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:43.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.311: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.311: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.394: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.394: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.473: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.473: INFO: 
Poking "http://34.127.89.252:80" Nov 26 14:41:43.552: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.552: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.630: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.630: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.709: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.709: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.788: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.788: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.866: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.866: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:43.945: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:43.945: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:44.025: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:44.025: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:44.104: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:44.104: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:44.182: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:44.183: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:44.261: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:44.261: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:45.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.150: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.150: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.229: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.229: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.386: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.386: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.464: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.464: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.543: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.543: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.621: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.621: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:41:45.700: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.700: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.778: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.778: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.857: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.857: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:45.936: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:45.936: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:46.014: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:46.014: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:46.093: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:46.093: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:46.171: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:46.171: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:46.251: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:46.251: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:47.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.159: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.159: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.238: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.239: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.317: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.317: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.396: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.396: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.475: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.475: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.553: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.553: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.632: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.632: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.710: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.710: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.789: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.789: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:41:47.868: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.868: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:47.946: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:47.946: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:48.025: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:48.025: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:48.104: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:48.104: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:48.182: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:48.182: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:48.262: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:48.262: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:49.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.153: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.153: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.311: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.311: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.390: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.390: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.469: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.469: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.547: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.547: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.626: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.626: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.705: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.705: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.783: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.783: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.862: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.862: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:49.940: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:49.940: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:41:50.019: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:50.019: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:50.098: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:50.098: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:50.176: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:50.176: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:50.255: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:50.255: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:51.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.152: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.152: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.231: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.231: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.333: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.333: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.412: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.412: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.490: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.490: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.569: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.569: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.648: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.648: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.728: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.728: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.811: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.811: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.889: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.889: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:51.968: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:51.968: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:52.046: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:52.046: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:52.125: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:52.125: INFO: Poking 
"http://34.127.89.252:80" Nov 26 14:41:52.204: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:52.204: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:52.283: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:52.283: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:53.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.247: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.247: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.325: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.325: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.407: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.407: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.485: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.485: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.565: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.565: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.643: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.643: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.722: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.722: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.800: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.800: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.879: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.879: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:53.957: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:53.957: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:54.036: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:54.036: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:54.114: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:54.114: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:54.193: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:54.193: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:54.271: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:54.271: INFO: Received response 
from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:54.271: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:55.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:55.939: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:55.939: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:56.017: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:56.017: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:56.096: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:56.096: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:56.176: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:56.176: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:56.254: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:56.254: INFO: Received 
response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:56.254: INFO: Received response from host: affinity-lb-esipp-transition-l5dkx Nov 26 14:41:57.072: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.151: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.151: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.230: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.230: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.308: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.308: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.387: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.387: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.466: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.466: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.544: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.544: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.623: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.623: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.701: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.701: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.780: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.780: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.859: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.859: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:57.953: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:57.953: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:58.039: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:58.039: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:58.118: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:58.118: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:58.197: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:58.197: INFO: Poking "http://34.127.89.252:80" Nov 26 14:41:58.275: INFO: Poke("http://34.127.89.252:80"): success Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-l848s Nov 26 14:41:58.275: INFO: 
Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response from host: affinity-lb-esipp-transition-hdk6g Nov 26 14:41:58.275: INFO: Received response fr
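Editor's note: the Poke/Received-response pairs above are the test repeatedly hitting the load balancer IP and recording which backend pod (affinity-lb-esipp-transition-l5dkx, -l848s, -hdk6g) answered, in order to observe session affinity while externalTrafficPolicy is transitioned. The standalone Go sketch below illustrates that poll-and-tally pattern only; it is not the e2e framework's actual helper, and the URL, request count, and interval are placeholders taken from this log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// pollAffinity sends n GET requests to url, pausing interval between them,
// and returns a tally of which backend hostname served each response.
// It assumes the backend echoes its pod name in the response body, as the
// affinity test pods in this log appear to do (hypothetical assumption).
func pollAffinity(url string, n int, interval time.Duration) (map[string]int, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	counts := make(map[string]int)
	for i := 0; i < n; i++ {
		resp, err := client.Get(url)
		if err != nil {
			return counts, fmt.Errorf("poke %q: %w", url, err)
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return counts, err
		}
		// e.g. "affinity-lb-esipp-transition-l5dkx"
		counts[strings.TrimSpace(string(body))]++
		time.Sleep(interval)
	}
	return counts, nil
}

func main() {
	// Values mirror the log above: 15 responses per round, ~80ms apart.
	counts, err := pollAffinity("http://34.127.89.252:80", 15, 80*time.Millisecond)
	if err != nil {
		fmt.Println("poke failed:", err)
		return
	}
	// With affinity off, several distinct hosts are expected (as in this log);
	// once ClientIP affinity settles, a single host should dominate.
	fmt.Println("responses per backend:", counts)
	fmt.Println("affinity held:", len(counts) == 1)
}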