go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010c4780)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/27/22 00:15:42.758
Nov 27 00:15:42.758: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/27/22 00:15:42.76
Nov 27 00:15:42.799: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:44.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:46.838: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:48.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:50.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:52.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:54.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:56.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:58.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:00.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:02.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:04.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:06.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:08.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:10.839: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:12.840: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:12.879: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:16:12.879: INFO: Unexpected error:
<*errors.errorString | 0xc000195d70>: {
    s: "timed out waiting for the condition",
}
Nov 27 00:16:12.879: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010c4780)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 27 00:16:12.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:16:12.918
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
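Every request in the log above dies at the TCP layer: the apiserver at 34.83.110.108:443 refuses connections while the framework retries namespace creation, which points at the control plane rather than the test body. As a purely illustrative aid (this probe is hypothetical and not part of the suite), a standalone Go check can distinguish "connection refused" (host up, nothing listening, e.g. apiserver restarting) from a timeout (network path broken):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver's secure port the same way the rejected client
	// connections above were made.
	conn, err := net.DialTimeout("tcp", "34.83.110.108:443", 5*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the failure mode above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}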
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0000fd860)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/27/22 00:15:14.2
Nov 27 00:15:14.200: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/27/22 00:15:14.201
Nov 27 00:15:14.240: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:16.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:18.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:20.281: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:22.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:24.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:26.281: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:28.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:30.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:32.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:34.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:36.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:38.281: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:40.281: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:42.280: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:44.281: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:44.320: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:15:44.320: INFO: Unexpected error:
<*errors.errorString | 0xc000295d70>: {
    s: "timed out waiting for the condition",
}
Nov 27 00:15:44.321: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0000fd860)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 27 00:15:44.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:15:44.36
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
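The timestamps above show the framework's setup cadence: one namespace-creation attempt roughly every 2s for 30s, after which BeforeEach gives up with "timed out waiting for the condition". A minimal client-go sketch of that pattern (assumed names; this is not the framework's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries creation every 2s until a 30s deadline,
// mirroring the retry/timeout behaviour visible in the log above.
func createTestNamespace(client kubernetes.Interface, basename string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"}}
		got, err := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if err != nil {
			// Transient apiserver outages are retried, not fatal per attempt.
			fmt.Println("Unexpected error while creating namespace:", err)
			return false, nil
		}
		created = got
		return true, nil
	})
	// On deadline, err carries wait's "timed out waiting for the condition".
	return created, err
}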
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:133
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:133 +0x290

There were additional failures detected after the initial failure:
[FAILED] Nov 26 23:55:49.088: failed to list events in namespace "cronjob-6165": Get "https://34.83.110.108/api/v1/namespaces/cronjob-6165/events": dial tcp 34.83.110.108:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 23:55:49.128: Couldn't delete ns: "cronjob-6165": Delete "https://34.83.110.108/api/v1/namespaces/cronjob-6165": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/cronjob-6165", Err:(*net.OpError)(0xc00191cbe0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 23:55:42.439
Nov 26 23:55:42.439: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename cronjob 11/26/22 23:55:42.441
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:55:42.574
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:55:42.712
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:31
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/apps/cronjob.go:124
STEP: Creating a ForbidConcurrent cronjob 11/26/22 23:55:42.831
STEP: Ensuring a job is scheduled 11/26/22 23:55:42.97
Nov 26 23:55:49.009: INFO: Unexpected error: Failed to schedule CronJob forbid:
<*url.Error | 0xc0041e5620>: {
    Op: "Get",
    URL: "https://34.83.110.108/apis/batch/v1/namespaces/cronjob-6165/cronjobs/forbid",
    Err: <*net.OpError | 0xc00120b0e0>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc001246510>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc003e46840>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 26 23:55:49.009: FAIL: Failed to schedule CronJob forbid: Get "https://34.83.110.108/apis/batch/v1/namespaces/cronjob-6165/cronjobs/forbid": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:133 +0x290
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/node/init/init.go:32
Nov 26 23:55:49.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] CronJob
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] CronJob
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 23:55:49.049
STEP: Collecting events from namespace "cronjob-6165". 11/26/22 23:55:49.049
Nov 26 23:55:49.088: INFO: Unexpected error: failed to list events in namespace "cronjob-6165":
<*url.Error | 0xc0041e5aa0>: {
    Op: "Get",
    URL: "https://34.83.110.108/api/v1/namespaces/cronjob-6165/events",
    Err: <*net.OpError | 0xc00120b360>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0014cc3c0>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc003e46bc0>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 26 23:55:49.088: FAIL: failed to list events in namespace "cronjob-6165": Get "https://34.83.110.108/api/v1/namespaces/cronjob-6165/events": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0038745c0, {0xc0041bff60, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000cfd6c0}, {0xc0041bff60, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003874650?, {0xc0041bff60?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00061d860)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc0014a4ac0?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0014a4ac0?, 0x29449fc?}, {0xae73300?, 0xc0040d0f80?, 0xc0008ca618?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] CronJob
  tear down framework | framework.go:193
STEP: Destroying namespace "cronjob-6165" for this suite. 11/26/22 23:55:49.089
Nov 26 23:55:49.128: FAIL: Couldn't delete ns: "cronjob-6165": Delete "https://34.83.110.108/api/v1/namespaces/cronjob-6165": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/cronjob-6165", Err:(*net.OpError)(0xc00191cbe0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00061d860)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0014a49d0?, 0x13?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0014a49d0?, 0xc000647f68?}, {0xae73300?, 0x801de88?, 0xc000cfd6c0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
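Unlike the two setup failures above, this run got past framework setup and failed inside the test at cronjob.go:133: the "Ensuring a job is scheduled" step polls the CronJob, and the Get itself was refused. A rough client-go sketch of that kind of check (illustrative names and timeouts, not the test's literal code):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForActiveJobs polls a CronJob until it records at least `want`
// active Jobs. A refused Get, as in the log above, aborts the poll early
// because the condition function returns the error.
func waitForActiveJobs(client kubernetes.Interface, ns, name string, want int) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		cj, err := client.BatchV1().CronJobs(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // network errors surface immediately
		}
		return len(cj.Status.Active) >= want, nil
	})
}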
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0014e7ba0}, 0xc00226e000)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000e228e8, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc00073be48?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc0014e7ba0?, 0xc00073be88?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc0014e7ba0}, 0x3, 0x3, 0xc00226e000)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
	test/e2e/apps/statefulset.go:719 +0x3d0

There were additional failures detected after the initial failure:
[FAILED] Nov 27 00:00:18.220: Get "https://34.83.110.108/apis/apps/v1/namespaces/statefulset-5891/statefulsets": dial tcp 34.83.110.108:443: connect: connection refused
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76
----------
[FAILED] Nov 27 00:00:18.299: failed to list events in namespace "statefulset-5891": Get "https://34.83.110.108/api/v1/namespaces/statefulset-5891/events": dial tcp 34.83.110.108:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 27 00:00:18.338: Couldn't delete ns: "statefulset-5891": Delete "https://34.83.110.108/api/v1/namespaces/statefulset-5891": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/statefulset-5891", Err:(*net.OpError)(0xc003a340a0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 23:57:55.747 Nov 26 23:57:55.747: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 23:57:55.749 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:57:55.88 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:57:55.975 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-5891 11/26/22 23:57:56.055 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:697 STEP: Creating stateful set ss in namespace statefulset-5891 11/26/22 23:57:56.098 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5891 11/26/22 23:57:56.141 Nov 26 23:57:56.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 23:58:06.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 23:58:16.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 23:58:26.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 23:58:36.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 23:58:46.225: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 23:58:56.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 11/26/22 23:58:56.224 Nov 26 23:58:56.267: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5891 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 23:58:56.809: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 23:58:56.809: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 23:58:56.809: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 23:58:56.851: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 26 23:59:06.893: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 26 23:59:06.893: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 23:59:07.063: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:07.063: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:07.063: 
INFO: Nov 26 23:59:07.063: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:08.107: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:08.107: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:08.107: INFO: Nov 26 23:59:08.107: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:09.187: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:09.188: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:09.188: INFO: Nov 26 23:59:09.188: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:10.230: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:10.230: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:10.230: INFO: Nov 26 23:59:10.230: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:11.272: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:11.272: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:11.272: INFO: Nov 26 23:59:11.272: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:12.316: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:12.317: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:12.317: INFO: Nov 26 23:59:12.317: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:13.359: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:13.359: INFO: ss-0 
bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:13.359: INFO: Nov 26 23:59:13.359: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:14.401: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:14.401: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:14.401: INFO: Nov 26 23:59:14.401: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:15.444: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:15.444: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:15.444: INFO: Nov 26 23:59:15.444: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 23:59:16.487: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 23:59:16.487: INFO: ss-0 bootstrap-e2e-minion-group-2qlj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:58:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:56 +0000 UTC }] Nov 26 23:59:16.487: INFO: Nov 26 23:59:16.487: INFO: StatefulSet ss has not reached scale 3, at 1 STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5891 11/26/22 23:59:17.488 Nov 26 23:59:17.530: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=statefulset-5891 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 23:59:18.058: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 26 23:59:18.058: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 26 23:59:18.058: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 26 23:59:18.100: INFO: Found 1 stateful pods, waiting for 3 Nov 26 23:59:28.143: INFO: Found 1 stateful pods, waiting for 3 Nov 26 
23:59:38.143: INFO: Found 1 stateful pods, waiting for 3
Nov 26 23:59:48.146: INFO: Found 1 stateful pods, waiting for 3
Nov 26 23:59:58.142: INFO: Found 1 stateful pods, waiting for 3
Nov 27 00:00:08.147: INFO: Found 1 stateful pods, waiting for 3
Nov 27 00:00:18.140: INFO: Unexpected error:
<*url.Error | 0xc002d86390>: {
    Op: "Get",
    URL: "https://34.83.110.108/api/v1/namespaces/statefulset-5891/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
    Err: <*net.OpError | 0xc00217dc20>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0038d56b0>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc0010dd4a0>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 27 00:00:18.140: FAIL: Get "https://34.83.110.108/api/v1/namespaces/statefulset-5891/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0014e7ba0}, 0xc00226e000)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000e228e8, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc00073be48?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc0014e7ba0?, 0xc00073be88?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc0014e7ba0}, 0x3, 0x3, 0xc00226e000)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
	test/e2e/apps/statefulset.go:719 +0x3d0
E1127 00:00:18.141213 8165 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0014e7ba0}, 0xc00226e000)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000e228e8, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc00073be48?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc0014e7ba0?, 0xc00073be88?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc0014e7ba0}, 0x3, 0x3, 0xc00226e000)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()\n\ttest/e2e/apps/statefulset.go:719 +0x3d0", CustomMessage:""}}
(Your Test Panicked
test/e2e/framework/statefulset/rest.go:69

When you, or your assertion library, calls Ginkgo's Fail(),
Ginkgo panics to prevent subsequent assertions from running.

Normally Ginkgo rescues this panic so you shouldn't see it.

However, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

	defer GinkgoRecover()

at the top of the goroutine that caused this panic.

Alternatively, you may have made an assertion outside of a Ginkgo
leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to
an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).

Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
)
goroutine 758 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc0006fb960})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0006fb960?})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x70eb7e0, 0xc0006fb960})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc001610900, 0xb6}, {0xc0012335a8?, 0x75b521a?, 0xc0012335c8?})
	vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225
k8s.io/kubernetes/test/e2e/framework.Fail({0xc0036a00b0, 0xa1}, {0xc001233640?, 0xc0036a00b0?, 0xc001233668?})
	test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc002d86390}, {0x0?, 0xc0031585d0?, 0x10?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0014e7ba0}, 0xc00226e000)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc000e228e8, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc00073be48?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc0014e7ba0?, 0xc00073be88?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc0014e7ba0}, 0x3, 0x3, 0xc00226e000)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
	test/e2e/apps/statefulset.go:719 +0x3d0
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0006dbe00})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98
created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:124
Nov 27 00:00:18.180: INFO: Deleting all statefulset in ns statefulset-5891
Nov 27 00:00:18.219: INFO: Unexpected error:
<*url.Error | 0xc002d868d0>: {
    Op: "Get",
    URL: "https://34.83.110.108/apis/apps/v1/namespaces/statefulset-5891/statefulsets",
    Err: <*net.OpError | 0xc00217dea0>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0036297d0>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc0010dd980>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 27 00:00:18.220: FAIL: Get "https://34.83.110.108/apis/apps/v1/namespaces/statefulset-5891/statefulsets": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc0014e7ba0}, {0xc0022d4a80, 0x10})
	test/e2e/framework/statefulset/rest.go:76 +0x113
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()
	test/e2e/apps/statefulset.go:129 +0x1b2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 27 00:00:18.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:00:18.259
STEP: Collecting events from namespace "statefulset-5891". 11/27/22 00:00:18.259
Nov 27 00:00:18.298: INFO: Unexpected error: failed to list events in namespace "statefulset-5891":
<*url.Error | 0xc003629800>: {
    Op: "Get",
    URL: "https://34.83.110.108/api/v1/namespaces/statefulset-5891/events",
    Err: <*net.OpError | 0xc003838280>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0038d5c20>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc000609220>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 27 00:00:18.299: FAIL: failed to list events in namespace "statefulset-5891": Get "https://34.83.110.108/api/v1/namespaces/statefulset-5891/events": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0012725c0, {0xc0022d4a80, 0x10})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0014e7ba0}, {0xc0022d4a80, 0x10})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001272650?, {0xc0022d4a80?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0010ec1e0)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc000204bb0?, 0xc003c88fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002ccca48?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc000204bb0?, 0x29449fc?}, {0xae73300?, 0xc003c88f80?, 0xc000e23578?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] StatefulSet
  tear down framework | framework.go:193
STEP: Destroying namespace "statefulset-5891" for this suite. 11/27/22 00:00:18.299
Nov 27 00:00:18.338: FAIL: Couldn't delete ns: "statefulset-5891": Delete "https://34.83.110.108/api/v1/namespaces/statefulset-5891": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/statefulset-5891", Err:(*net.OpError)(0xc003a340a0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0010ec1e0)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0002049c0?, 0xc002a6dfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0002049c0?, 0x0?}, {0xae73300?, 0x5?, 0xc001047a88?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
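The initial failure here is inside WaitForRunningAndReady: its poll condition lists the set's pods with the selector encoded in the failing URL (labelSelector=baz%3Dblah%2Cfoo%3Dbar, i.e. baz=blah,foo=bar), and that List was refused. An approximate client-go equivalent of the failing call (a sketch, not the framework's actual GetPodList):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listStatefulSetPods issues the same kind of label-selector List that
// produced the "connection refused" above.
func listStatefulSetPods(client kubernetes.Interface, ns string) (*v1.PodList, error) {
	return client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "baz=blah,foo=bar",
	})
}

Because the error is asserted via ExpectNoError inside the polling goroutine, Ginkgo also reports the "Your Test Panicked" block seen above; the panic is a symptom of where the assertion fired, not a separate bug.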
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00126a2d0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/27/22 00:05:36.612
Nov 27 00:05:36.612: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset 11/27/22 00:05:36.614
Nov 27 00:07:36.663: INFO: Unexpected error:
<*fmt.wrapError | 0xc003680000>: {
    msg: "wait for service account \"default\" in namespace \"statefulset-928\": timed out waiting for the condition",
    err: <*errors.errorString | 0xc0002499e0>{
        s: "timed out waiting for the condition",
    },
}
Nov 27 00:07:36.663: FAIL: wait for service account "default" in namespace "statefulset-928": timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00126a2d0)
	test/e2e/framework/framework.go:241 +0x96f
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 27 00:07:36.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:07:36.8
STEP: Collecting events from namespace "statefulset-928". 11/27/22 00:07:36.8
STEP: Found 0 events. 11/27/22 00:07:36.843
Nov 27 00:07:36.884: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 27 00:07:36.884: INFO:
Nov 27 00:07:36.926: INFO: Logging node info for node bootstrap-e2e-master
Nov 27 00:07:36.967: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 05f7f9a2-a79e-4352-8a79-75844a59633a 3637 0 2022-11-26 23:53:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 23:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-27 00:04:53 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:04:53 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:04:53 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:04:53 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:04:53 +0000 UTC,LastTransitionTime:2022-11-26 23:54:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.110.108,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:21ea7dd3-945c-4bf1-ab0c-68e321320196,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:07:36.968: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 27 00:07:37.019: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 27 00:07:37.107: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container kube-scheduler ready: true, restart count 4 Nov 27 00:07:37.107: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container kube-apiserver ready: true, restart count 3 Nov 27 00:07:37.107: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 23:53:24 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 27 00:07:37.107: INFO: 
metadata-proxy-v0.1-phnnv started at 2022-11-26 23:54:19 +0000 UTC (0+2 container statuses recorded) Nov 27 00:07:37.107: INFO: Container metadata-proxy ready: true, restart count 0 Nov 27 00:07:37.107: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 27 00:07:37.107: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 23:53:24 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container l7-lb-controller ready: false, restart count 5 Nov 27 00:07:37.107: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container etcd-container ready: true, restart count 2 Nov 27 00:07:37.107: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container etcd-container ready: true, restart count 1 Nov 27 00:07:37.107: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container konnectivity-server-container ready: true, restart count 2 Nov 27 00:07:37.107: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 23:53:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.107: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 27 00:07:37.426: INFO: Latency metrics for node bootstrap-e2e-master Nov 27 00:07:37.426: INFO: Logging node info for node bootstrap-e2e-minion-group-1tnv Nov 27 00:07:37.493: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-1tnv 846ba36d-94f6-4e94-b203-fd107e853327 3920 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-1tnv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-1tnv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2739":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-2574":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-4384":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-9757":"bootstrap-e2e-minion-group-1tnv","csi-mock-csi-mock-volumes-8953":"csi-mock-csi-mock-volumes-8953"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:58:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-27 00:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-27 00:06:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-1tnv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 
UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:46 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:46 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:46 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:03:46 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.94.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b50e399c30b5a1961e4b37e487500979,SystemUUID:b50e399c-30b5-a196-1e4b-37e487500979,BootID:049b80da-b98b-4e50-9ce2-87280edcdc78,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122 kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347 kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2027^1fb4840f-6de6-11ed-b41a-96c9bb8b92a9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2739^2a7e0a80-6de6-11ed-83a6-224e25ee64d6,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1004^f7751827-6de5-11ed-a986-3af9dba8f2ba,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9757^1e551df0-6de6-11ed-acef-92a46bd148c0,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4384^1da125f1-6de6-11ed-9f92-ce37b6f7d123,DevicePath:,},},Config:nil,},} Nov 27 00:07:37.493: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-1tnv Nov 27 00:07:37.539: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-1tnv Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:57:56 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container hostpath ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 27 00:07:37.636: INFO: test-hostpath-type-qkvzt started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container host-path-testing ready: true, restart count 0 Nov 27 00:07:37.636: INFO: pod-subpath-test-dynamicpv-smkl started at 2022-11-26 23:57:31 +0000 UTC (1+2 container statuses recorded) Nov 27 00:07:37.636: INFO: Init container init-volume-dynamicpv-smkl ready: false, restart count 0 Nov 27 00:07:37.636: INFO: Container test-container-subpath-dynamicpv-smkl ready: false, restart count 0 Nov 27 00:07:37.636: 
INFO: Container test-container-volume-dynamicpv-smkl ready: false, restart count 0 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:57:31 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container hostpath ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: false, restart count 5 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:57:31 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container hostpath ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: true, restart count 4 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:57:31 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container hostpath ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 27 00:07:37.636: INFO: coredns-6d97d5ddb-2d8xq started at 2022-11-26 23:54:12 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container coredns ready: false, restart count 6 Nov 27 00:07:37.636: INFO: hostexec-bootstrap-e2e-minion-group-1tnv-r9ch5 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container agnhost-container ready: false, restart count 4 Nov 27 00:07:37.636: INFO: hostpath-injector started at 2022-11-26 23:57:32 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container hostpath-injector ready: true, restart count 1 Nov 27 00:07:37.636: INFO: pod-subpath-test-inlinevolume-xwjz started at 2022-11-26 23:55:43 +0000 UTC (1+2 container statuses recorded) Nov 27 00:07:37.636: INFO: Init container init-volume-inlinevolume-xwjz ready: true, restart count 1 Nov 27 00:07:37.636: INFO: Container test-container-subpath-inlinevolume-xwjz ready: false, restart count 3 Nov 27 00:07:37.636: INFO: Container test-container-volume-inlinevolume-xwjz ready: false, restart count 3 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:55:45 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: false, restart count 6 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: false, restart count 6 Nov 27 00:07:37.636: 
INFO: Container csi-resizer ready: false, restart count 6 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 27 00:07:37.636: INFO: Container hostpath ready: false, restart count 6 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: false, restart count 6 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:55:46 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container hostpath ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: false, restart count 3 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:55:45 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container hostpath ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 27 00:07:37.636: INFO: csi-mockplugin-0 started at 2022-11-26 23:57:32 +0000 UTC (0+4 container statuses recorded) Nov 27 00:07:37.636: INFO: Container busybox ready: true, restart count 2 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: false, restart count 4 Nov 27 00:07:37.636: INFO: Container driver-registrar ready: true, restart count 3 Nov 27 00:07:37.636: INFO: Container mock ready: true, restart count 3 Nov 27 00:07:37.636: INFO: kube-proxy-bootstrap-e2e-minion-group-1tnv started at 2022-11-26 23:53:52 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container kube-proxy ready: false, restart count 6 Nov 27 00:07:37.636: INFO: metadata-proxy-v0.1-j9784 started at 2022-11-26 23:53:53 +0000 UTC (0+2 container statuses recorded) Nov 27 00:07:37.636: INFO: Container metadata-proxy ready: true, restart count 0 Nov 27 00:07:37.636: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 27 00:07:37.636: INFO: konnectivity-agent-m7xg5 started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container konnectivity-agent ready: true, restart count 6 Nov 27 00:07:37.636: INFO: pod-configmaps-471b06fb-f42e-40a3-8279-cdc82605fde5 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container agnhost-container ready: false, restart count 0 Nov 27 00:07:37.636: INFO: csi-hostpathplugin-0 started at 2022-11-26 23:55:45 +0000 UTC (0+7 container statuses recorded) Nov 27 00:07:37.636: INFO: Container csi-attacher ready: true, restart count 6 Nov 27 00:07:37.636: INFO: Container csi-provisioner ready: true, restart count 6 Nov 27 00:07:37.636: INFO: Container csi-resizer ready: true, restart count 6 Nov 27 
00:07:37.636: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 27 00:07:37.636: INFO: Container hostpath ready: true, restart count 6 Nov 27 00:07:37.636: INFO: Container liveness-probe ready: true, restart count 6 Nov 27 00:07:37.636: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 27 00:07:37.636: INFO: hostexec-bootstrap-e2e-minion-group-1tnv-pm8mn started at 2022-11-26 23:57:31 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container agnhost-container ready: true, restart count 2 Nov 27 00:07:37.636: INFO: hostexec-bootstrap-e2e-minion-group-1tnv-25kq4 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:37.636: INFO: Container agnhost-container ready: true, restart count 3 Nov 27 00:07:38.354: INFO: Latency metrics for node bootstrap-e2e-minion-group-1tnv Nov 27 00:07:38.354: INFO: Logging node info for node bootstrap-e2e-minion-group-2qlj Nov 27 00:07:38.395: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2qlj 2a4622eb-989e-4a24-9c67-05b1d3225d2a 3518 0 2022-11-26 23:53:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2qlj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:03:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:03:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-2qlj,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:59 +0000 
UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:03:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.85.27,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ff92829536287513f2f9166c8337ac18,SystemUUID:ff928295-3628-7513-f2f9-166c8337ac18,BootID:f8a0168e-4102-47f3-bee3-ed410278cdb0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:07:38.395: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2qlj Nov 27 00:07:38.438: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2qlj Nov 27 00:07:38.491: INFO: hostexec-bootstrap-e2e-minion-group-2qlj-ctpzz started at 2022-11-26 23:58:18 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: false, restart count 5 Nov 27 00:07:38.491: INFO: metadata-proxy-v0.1-z976q started at 2022-11-26 23:53:54 +0000 UTC (0+2 container statuses recorded) Nov 27 00:07:38.491: INFO: Container metadata-proxy ready: true, restart count 0 Nov 27 00:07:38.491: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 27 00:07:38.491: INFO: hostexec-bootstrap-e2e-minion-group-2qlj-cl8vd started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: true, restart count 5 Nov 27 00:07:38.491: INFO: kube-dns-autoscaler-5f6455f985-rm4jx started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container autoscaler ready: true, restart count 6 Nov 27 00:07:38.491: INFO: pod-subpath-test-inlinevolume-pvb2 started at 2022-11-26 23:55:43 +0000 UTC (1+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Init container init-volume-inlinevolume-pvb2 ready: true, restart count 0 Nov 27 00:07:38.491: INFO: Container test-container-subpath-inlinevolume-pvb2 ready: false, restart count 0 Nov 27 00:07:38.491: 
INFO: hostexec-bootstrap-e2e-minion-group-2qlj-tl888 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: false, restart count 3 Nov 27 00:07:38.491: INFO: test-hostpath-type-tgsmt started at 2022-11-26 23:58:01 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container host-path-testing ready: false, restart count 0 Nov 27 00:07:38.491: INFO: kube-proxy-bootstrap-e2e-minion-group-2qlj started at 2022-11-26 23:53:53 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container kube-proxy ready: false, restart count 6 Nov 27 00:07:38.491: INFO: l7-default-backend-8549d69d99-8xv7s started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container default-http-backend ready: true, restart count 0 Nov 27 00:07:38.491: INFO: volume-snapshot-controller-0 started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container volume-snapshot-controller ready: false, restart count 5 Nov 27 00:07:38.491: INFO: konnectivity-agent-j22pm started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 27 00:07:38.491: INFO: hostexec-bootstrap-e2e-minion-group-2qlj-pmsct started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: true, restart count 4 Nov 27 00:07:38.491: INFO: hostexec-bootstrap-e2e-minion-group-2qlj-74lkx started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: true, restart count 3 Nov 27 00:07:38.491: INFO: ss-0 started at 2022-11-26 23:57:56 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container webserver ready: false, restart count 4 Nov 27 00:07:38.491: INFO: coredns-6d97d5ddb-rsdrv started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container coredns ready: true, restart count 5 Nov 27 00:07:38.491: INFO: hostexec-bootstrap-e2e-minion-group-2qlj-chl64 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.491: INFO: Container agnhost-container ready: true, restart count 2 Nov 27 00:07:38.691: INFO: Latency metrics for node bootstrap-e2e-minion-group-2qlj Nov 27 00:07:38.691: INFO: Logging node info for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:07:38.732: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rbx5 71a3da10-2d41-41fe-9331-fb855b0bb42f 3516 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rbx5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rbx5 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-rbx5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:03:58 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:03:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:03:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.203.146.23,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3e37064ccccf109daf185b691fbc1e81,SystemUUID:3e37064c-cccf-109d-af18-5b691fbc1e81,BootID:aa4950bb-b649-4cf2-8496-bbb949f31f9b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:07:38.733: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:07:38.777: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rbx5 Nov 27 00:07:38.828: INFO: var-expansion-4169e1b9-3913-491f-a159-d5a6eec89535 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container dapi-container ready: false, restart count 0 Nov 27 00:07:38.828: INFO: mutability-test-mz5vr started at 2022-11-26 23:57:31 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container netexec ready: false, restart count 3 Nov 27 00:07:38.828: INFO: kube-proxy-bootstrap-e2e-minion-group-rbx5 started at 2022-11-26 23:53:52 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container kube-proxy ready: false, restart count 5 Nov 27 00:07:38.828: INFO: metrics-server-v0.5.2-867b8754b9-vxr4m started at 2022-11-26 23:54:28 +0000 UTC (0+2 container statuses recorded) Nov 27 00:07:38.828: INFO: Container metrics-server ready: false, restart count 6 Nov 27 00:07:38.828: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 27 00:07:38.828: INFO: hostexec-bootstrap-e2e-minion-group-rbx5-9ldp9 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container agnhost-container ready: true, restart count 2 Nov 27 00:07:38.828: INFO: test-hostpath-type-42r9q started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container host-path-testing ready: false, restart count 0 Nov 27 00:07:38.828: INFO: mutability-test-cjthc started at 2022-11-26 23:57:31 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container netexec ready: false, restart count 5 Nov 27 00:07:38.828: INFO: forbid-27825117-vtsr7 started at 2022-11-26 23:57:31 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container c ready: true, restart count 3 Nov 27 00:07:38.828: INFO: hostpath-symlink-prep-provisioning-2430 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container init-volume-provisioning-2430 ready: false, restart count 0 Nov 27 00:07:38.828: INFO: pod-back-off-image started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container back-off ready: false, restart count 7 Nov 27 00:07:38.828: INFO: csi-mockplugin-0 started at 2022-11-26 23:55:45 +0000 UTC (0+3 container statuses recorded) Nov 27 00:07:38.828: INFO: Container csi-provisioner ready: false, restart count 6 Nov 27 00:07:38.828: INFO: Container driver-registrar ready: true, restart count 5 Nov 27 00:07:38.828: INFO: Container mock ready: false, restart count 6 Nov 27 00:07:38.828: INFO: metadata-proxy-v0.1-vhj54 started at 2022-11-26 23:53:53 +0000 UTC (0+2 container statuses recorded) Nov 27 00:07:38.828: INFO: Container metadata-proxy ready: true, restart count 0 Nov 27 00:07:38.828: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 27 00:07:38.828: INFO: konnectivity-agent-9dsln started at 2022-11-26 23:54:05 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container 
konnectivity-agent ready: false, restart count 5 Nov 27 00:07:38.828: INFO: execpod-acceptlqjfs started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container agnhost-container ready: false, restart count 3 Nov 27 00:07:38.828: INFO: pod-590cbda1-2e3c-40a4-93b4-3c3eeb5112d3 started at 2022-11-26 23:55:43 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container write-pod ready: false, restart count 0 Nov 27 00:07:38.828: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 23:55:45 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container csi-attacher ready: true, restart count 4 Nov 27 00:07:38.828: INFO: lb-internal-4qqp2 started at 2022-11-26 23:58:08 +0000 UTC (0+1 container statuses recorded) Nov 27 00:07:38.828: INFO: Container netexec ready: true, restart count 5 Nov 27 00:07:39.028: INFO: Latency metrics for node bootstrap-e2e-minion-group-rbx5 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-928" for this suite. 11/27/22 00:07:39.028
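The node dumps above are the framework's standard post-failure diagnostics: for each node it prints the full Node object, then the kubelet's view of every pod and container on it. To pull the same NodeCondition data outside the e2e framework, a minimal client-go sketch (assuming the kubeconfig path used by this run; the helper is illustrative and not part of the test code) is:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the kubeconfig the e2e run points at (path is illustrative).
    cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    // Print the same condition set the dumps show: the kubelet conditions
    // (Ready, MemoryPressure, DiskPressure, PIDPressure) plus the
    // node-problem-detector ones (KernelDeadlock, ReadonlyFilesystem, ...).
    for _, n := range nodes.Items {
        for _, c := range n.Status.Conditions {
            fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
        }
    }
}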
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
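The --ginkgo.focus value is a regular expression matched against the fully qualified spec name; the backslashes escape spaces, hyphens, and brackets so the pattern survives both the shell and the regex engine, and the trailing $ anchors it to the end of the name. A standalone illustration with Go's regexp package (the spec string below is reconstructed from the pattern, not read from Ginkgo itself):

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // The focus pattern from the command above, verbatim.
    focus := `Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$`
    // The spec name Ginkgo would assemble for this test (reconstructed).
    spec := "Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]"
    fmt.Println(regexp.MustCompile(focus).MatchString(spec)) // true
}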
test/e2e/auth/service_accounts.go:497 k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:497 +0x877 There were additional failures detected after the initial failure: [FAILED] Nov 27 00:15:12.459: Couldn't delete ns: "svcaccounts-2228": Delete "https://34.83.110.108/api/v1/namespaces/svcaccounts-2228": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/svcaccounts-2228", Err:(*net.OpError)(0xc005110870)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
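The parenthesized value in the cleanup failure is the raw Go error printed with %#v: a *url.Error wrapping a *net.OpError from the refused TCP connection to the apiserver. The same nested shape can be reproduced with only the standard library (synthetic local address; the Op field differs because this sketch issues a GET rather than a DELETE):

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Dialing a local port with no listener normally yields
    // *url.Error{Op:"Get", URL:..., Err:(*net.OpError)(...)} carrying
    // "connect: connection refused", matching the error in the log above.
    _, err := http.Get("http://127.0.0.1:1")
    fmt.Printf("%v\n%#v\n", err, err)
}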
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:14:07.815 Nov 27 00:14:07.815: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/27/22 00:14:07.816 STEP: Waiting for a default service account to be provisioned in namespace 11/27/22 00:14:07.979 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/27/22 00:14:08.064 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 27 00:14:10.996: INFO: created pod Nov 27 00:14:10.996: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 27 00:14:10.996: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-2228" to be "running and ready" Nov 27 00:14:11.074: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 77.683601ms Nov 27 00:14:11.074: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:13.140: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14336251s Nov 27 00:14:13.140: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:15.129: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132338576s Nov 27 00:14:15.129: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:17.136: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13907456s Nov 27 00:14:17.136: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:23.072: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075907006s Nov 27 00:14:23.072: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:23.162: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165575s Nov 27 00:14:23.162: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on '' to be 'Running' but was 'Pending' Nov 27 00:14:25.127: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 14.130563689s Nov 27 00:14:25.127: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Pending' Nov 27 00:14:27.133: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 16.136145339s Nov 27 00:14:27.133: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Pending' Nov 27 00:14:29.121: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 18.124588931s Nov 27 00:14:29.121: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Pending' Nov 27 00:14:31.129: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.132707793s Nov 27 00:14:31.129: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Pending' Nov 27 00:14:33.125: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 22.128828555s Nov 27 00:14:33.125: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Pending' Nov 27 00:14:35.121: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 24.124410345s Nov 27 00:14:35.121: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:37.128: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 26.13120321s Nov 27 00:14:37.128: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:39.160: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 28.16334297s Nov 27 00:14:39.160: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:41.130: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 30.133689813s Nov 27 00:14:41.130: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:43.154: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 32.157744886s Nov 27 00:14:43.154: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:45.159: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 34.162721183s Nov 27 00:14:45.159: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:47.145: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 36.14846006s Nov 27 00:14:47.145: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:49.218: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 38.221934448s Nov 27 00:14:49.218: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:51.152: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 40.155582344s Nov 27 00:14:51.152: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:53.126: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 42.129776342s Nov 27 00:14:53.126: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:55.120: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. 
Elapsed: 44.123750243s Nov 27 00:14:55.120: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:57.126: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 46.129673553s Nov 27 00:14:57.126: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:14:59.129: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 48.132210333s Nov 27 00:14:59.129: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:01.132: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 50.135556314s Nov 27 00:15:01.132: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:03.130: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 52.133824815s Nov 27 00:15:03.130: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:05.139: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 54.142705162s Nov 27 00:15:05.139: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:07.155: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 56.158227794s Nov 27 00:15:07.155: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:09.134: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 58.137128462s Nov 27 00:15:09.134: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:11.199: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 1m0.2022894s Nov 27 00:15:11.199: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:11.277: INFO: Pod "inclusterclient": Phase="Failed", Reason="", readiness=false. Elapsed: 1m0.28037641s Nov 27 00:15:11.277: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-2qlj' to be 'Running' but was 'Failed' Nov 27 00:15:11.277: INFO: Pod inclusterclient failed to be running and ready. Nov 27 00:15:11.277: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [inclusterclient] Nov 27 00:15:11.277: FAIL: pod "inclusterclient" in ns "svcaccounts-2228" never became ready Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:497 +0x877 [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 27 00:15:11.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:15:11.368 STEP: Collecting events from namespace "svcaccounts-2228". 11/27/22 00:15:11.368 STEP: Found 5 events. 11/27/22 00:15:11.438 Nov 27 00:15:11.438: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for inclusterclient: { } Scheduled: Successfully assigned svcaccounts-2228/inclusterclient to bootstrap-e2e-minion-group-2qlj Nov 27 00:15:11.438: INFO: At 2022-11-27 00:14:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-2qlj} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 27 00:15:11.438: INFO: At 2022-11-27 00:14:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-2qlj} Created: Created container inclusterclient Nov 27 00:15:11.438: INFO: At 2022-11-27 00:14:28 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-2qlj} Started: Started container inclusterclient Nov 27 00:15:11.438: INFO: At 2022-11-27 00:14:30 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-2qlj} Killing: Stopping container inclusterclient Nov 27 00:15:11.515: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 00:15:11.515: INFO: inclusterclient bootstrap-e2e-minion-group-2qlj Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:14:31 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:14:31 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:14:23 +0000 UTC }] Nov 27 00:15:11.515: INFO: Nov 27 00:15:11.605: INFO: Unable to fetch svcaccounts-2228/inclusterclient/inclusterclient logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) Nov 27 00:15:11.669: INFO: Logging node info for node bootstrap-e2e-master Nov 27 00:15:11.725: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 05f7f9a2-a79e-4352-8a79-75844a59633a 4121 0 2022-11-26 23:53:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 23:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:54:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.110.108,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:21ea7dd3-945c-4bf1-ab0c-68e321320196,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:15:11.725: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 27 00:15:11.808: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 27 00:15:11.878: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 27 00:15:11.878: INFO: Logging node info for node bootstrap-e2e-minion-group-1tnv Nov 27 00:15:11.971: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-1tnv 846ba36d-94f6-4e94-b203-fd107e853327 5932 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-1tnv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-1tnv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1004":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-multivolume-57":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-2574":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-4384":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-715":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-9757":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-volumeio-8036":"bootstrap-e2e-minion-group-1tnv","csi-mock-csi-mock-volumes-8953":"csi-mock-csi-mock-volumes-8953"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-27 00:14:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-27 00:15:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-1tnv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 
00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.94.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b50e399c30b5a1961e4b37e487500979,SystemUUID:b50e399c-30b5-a196-1e4b-37e487500979,BootID:049b80da-b98b-4e50-9ce2-87280edcdc78,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122 kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347 kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122,DevicePath:,},},Config:nil,},} Nov 27 00:15:11.971: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-1tnv Nov 27 00:15:12.081: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-1tnv Nov 27 00:15:12.183: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-1tnv: error trying to reach service: No agent available Nov 27 00:15:12.183: INFO: Logging node info for node bootstrap-e2e-minion-group-2qlj Nov 27 00:15:12.223: INFO: Error getting node info Get "https://34.83.110.108/api/v1/nodes/bootstrap-e2e-minion-group-2qlj": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:12.223: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:15:12.223: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2qlj Nov 27 00:15:12.262: INFO: Unexpected error retrieving node events Get 
"https://34.83.110.108/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-2qlj": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:12.262: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2qlj Nov 27 00:15:12.301: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2qlj: Get "https://34.83.110.108/api/v1/nodes/bootstrap-e2e-minion-group-2qlj:10250/proxy/pods": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:12.302: INFO: Logging node info for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:15:12.341: INFO: Error getting node info Get "https://34.83.110.108/api/v1/nodes/bootstrap-e2e-minion-group-rbx5": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:12.341: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:15:12.341: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:15:12.380: INFO: Unexpected error retrieving node events Get "https://34.83.110.108/api/v1/namespaces/kube-system/events?fieldSelector=involvedObject.kind%3DNode%2CinvolvedObject.namespace%3D%2Csource%3Dkubelet%2CinvolvedObject.name%3Dbootstrap-e2e-minion-group-rbx5": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:12.380: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rbx5 Nov 27 00:15:12.419: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-rbx5: Get "https://34.83.110.108/api/v1/nodes/bootstrap-e2e-minion-group-rbx5:10250/proxy/pods": dial tcp 34.83.110.108:443: connect: connection refused [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-2228" for this suite. 11/27/22 00:15:12.419 Nov 27 00:15:12.459: FAIL: Couldn't delete ns: "svcaccounts-2228": Delete "https://34.83.110.108/api/v1/namespaces/svcaccounts-2228": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/svcaccounts-2228", Err:(*net.OpError)(0xc005110870)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012c24b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0039bbf20?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0039bbf20?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b622d0) test/e2e/framework/framework.go:241 +0x96f
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:15:44.023 Nov 27 00:15:44.023: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/27/22 00:15:44.025 Nov 27 00:15:44.064: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:46.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:48.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:50.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:52.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:54.103: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:56.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:58.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:00.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:02.103: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:04.103: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:06.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:08.103: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:10.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:12.103: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:14.104: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:14.143: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:16:14.143: INFO: Unexpected error: <*errors.errorString | 0xc000215d80>: { s: "timed out waiting for the condition", } Nov 27 00:16:14.143: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b622d0) test/e2e/framework/framework.go:241 
+0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 27 00:16:14.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:16:14.183 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
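The repeated "connection refused" lines above come from a create-namespace call retried inside a poll loop; when the apiserver never recovers, the poll surfaces the generic "timed out waiting for the condition" error seen in the FAIL line. A minimal sketch of that pattern, assuming a configured clientset (names and the GenerateName scheme are illustrative):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries namespace creation until it succeeds or the
// poll times out, logging each failure the way the framework does above.
func createTestNamespace(cs kubernetes.Interface, basename string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(),
			&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"}},
			metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // e.g. "connection refused" while the apiserver is down
		}
		created = ns
		return true, nil
	})
	// If the apiserver never comes back, err is the wait package's generic
	// "timed out waiting for the condition" error.
	return created, err
}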
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0014113b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [FAILED] Nov 27 00:09:51.273: failed to list events in namespace "kubectl-6930": Get "https://34.83.110.108/api/v1/namespaces/kubectl-6930/events": dial tcp 34.83.110.108:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 27 00:09:51.313: Couldn't delete ns: "kubectl-6930": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-6930": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/kubectl-6930", Err:(*net.OpError)(0xc005190460)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:07:39.495 Nov 27 00:07:39.495: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/27/22 00:07:39.497 Nov 27 00:09:51.193: INFO: Unexpected error: <*fmt.wrapError | 0xc005166000>: { msg: "wait for service account \"default\" in namespace \"kubectl-6930\": timed out waiting for the condition", err: <*errors.errorString | 0xc00017da00>{ s: "timed out waiting for the condition", }, } Nov 27 00:09:51.193: FAIL: wait for service account "default" in namespace "kubectl-6930": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0014113b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 27 00:09:51.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:09:51.233 STEP: Collecting events from namespace "kubectl-6930". 11/27/22 00:09:51.233 Nov 27 00:09:51.273: INFO: Unexpected error: failed to list events in namespace "kubectl-6930": <*url.Error | 0xc0051741e0>: { Op: "Get", URL: "https://34.83.110.108/api/v1/namespaces/kubectl-6930/events", Err: <*net.OpError | 0xc004ae69b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003978750>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00397a040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 27 00:09:51.273: FAIL: failed to list events in namespace "kubectl-6930": Get "https://34.83.110.108/api/v1/namespaces/kubectl-6930/events": dial tcp 34.83.110.108:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00056a5c0, {0xc0051610d0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0035f7380}, {0xc0051610d0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00056a650?, {0xc0051610d0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0014113b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004f06f20?, 0xc003f07fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004f06f20?, 0x0?}, {0xae73300?, 0x5?, 0xc004afc120?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-6930" for this suite. 
11/27/22 00:09:51.273 Nov 27 00:09:51.313: FAIL: Couldn't delete ns: "kubectl-6930": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-6930": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/kubectl-6930", Err:(*net.OpError)(0xc005190460)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014113b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004f06ea0?, 0xc003cebf08?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0050ee1b0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004f06ea0?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
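The initial failure here is the wait for the "default" ServiceAccount that kube-controller-manager provisions in every new namespace; with the apiserver unreachable, the wait can only time out. A minimal sketch of that wait, matching the wrapped error format in the log (hypothetical helper, not the framework's code):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount
// exists in the namespace or the timeout expires.
func waitForDefaultServiceAccount(cs kubernetes.Interface, ns string, timeout time.Duration) error {
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		if _, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{}); err != nil {
			return false, nil // not provisioned yet, or the apiserver is unreachable
		}
		return true, nil
	})
	if err != nil {
		// Produces the shape seen above:
		// wait for service account "default" in namespace "kubectl-6930": timed out waiting for the condition
		return fmt.Errorf("wait for service account %q in namespace %q: %w", "default", ns, err)
	}
	return nil
}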
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/framework/kubectl/builder.go:87 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc00256e9a0?, 0x0?}, {0xc004cfffc0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc004cfffc0, 0xc}, {0xc0025302c0, 0x145}, {0xc00564bf38?, 0x3?, 0xc0032c8f08?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:414 +0x17f There were additional failures detected after the initial failure: [FAILED] Nov 27 00:15:12.417: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-3049/pods/httpd": dial tcp 34.83.110.108:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 27 00:15:12.496: failed to list events in namespace "kubectl-3049": Get "https://34.83.110.108/api/v1/namespaces/kubectl-3049/events": dial tcp 34.83.110.108:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 27 00:15:12.535: Couldn't delete ns: "kubectl-3049": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-3049": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/kubectl-3049", Err:(*net.OpError)(0xc004fcb590)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:15:11.311 Nov 27 00:15:11.311: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/27/22 00:15:11.313 STEP: Waiting for a default service account to be provisioned in namespace 11/27/22 00:15:11.572 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/27/22 00:15:11.663 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/27/22 00:15:11.758 Nov 27 00:15:11.758: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 create -f -' Nov 27 00:15:12.305: INFO: rc: 1 Nov 27 00:15:12.305: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc001473f30>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 create -f -:\nCommand stdout:\n\nstderr:\nerror: error when creating \"STDIN\": Post \"https://34.83.110.108/api/v1/namespaces/kubectl-3049/pods?fieldManager=kubectl-create&fieldValidation=Strict\": dial tcp 34.83.110.108:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 27 00:15:12.305: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 create -f -: Command stdout: stderr: error: error when creating "STDIN": Post "https://34.83.110.108/api/v1/namespaces/kubectl-3049/pods?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 34.83.110.108:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc00256e9a0?, 0x0?}, {0xc004cfffc0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc004cfffc0, 0xc}, {0xc0025302c0, 0x145}, {0xc00564bf38?, 0x3?, 0xc0032c8f08?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:414 +0x17f [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/27/22 00:15:12.306 Nov 27 00:15:12.306: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 delete --grace-period=0 --force -f -' Nov 27 00:15:12.416: INFO: rc: 1 Nov 27 00:15:12.417: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0018a4580>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.83.110.108/api/v1/namespaces/kubectl-3049/pods/httpd\": dial tcp 34.83.110.108:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 27 00:15:12.417: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=kubectl-3049 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-3049/pods/httpd": dial tcp 34.83.110.108:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc0026cadc0?, 0x0?}, {0xc004cfffc0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc004cfffc0, 0xc}, {0xc0025302c0, 0x145}, {0xc00564bec0?, 0x8?, 0x7fb57d881a68?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc0025302c0, 0x145}, {0xc004cfffc0, 0xc}, {0xc0018a4370, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 27 00:15:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:15:12.456 STEP: Collecting events from namespace "kubectl-3049". 
11/27/22 00:15:12.456 Nov 27 00:15:12.496: INFO: Unexpected error: failed to list events in namespace "kubectl-3049": <*url.Error | 0xc004ff8bd0>: { Op: "Get", URL: "https://34.83.110.108/api/v1/namespaces/kubectl-3049/events", Err: <*net.OpError | 0xc004fcb0e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005009380>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00417e700>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 27 00:15:12.496: FAIL: failed to list events in namespace "kubectl-3049": Get "https://34.83.110.108/api/v1/namespaces/kubectl-3049/events": dial tcp 34.83.110.108:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0052d85c0, {0xc004cfffc0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0056e41a0}, {0xc004cfffc0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0052d8650?, {0xc004cfffc0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0014113b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001472100?, 0xc00264afb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002a47dc8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001472100?, 0x29449fc?}, {0xae73300?, 0xc00264af80?, 0xc00264af70?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-3049" for this suite. 11/27/22 00:15:12.496 Nov 27 00:15:12.535: FAIL: Couldn't delete ns: "kubectl-3049": Delete "https://34.83.110.108/api/v1/namespaces/kubectl-3049": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/kubectl-3049", Err:(*net.OpError)(0xc004fcb590)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014113b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001472070?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001472070?, 0x7fe0bc8?}, {0xae73300?, 0x10000c0034ad7a0?, 0xc0055662d0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
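ExecOrDie in the traces above runs the kubectl binary as a subprocess and fails the test on a non-zero exit code, which is why both the create and the forced delete surface as "rc: 1" with the stderr attached. A rough sketch of that shape using os/exec (the server address, kubeconfig path, and binary name are illustrative placeholders taken from the log, not the framework's implementation):

package e2esketch

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runKubectl shells out to kubectl and returns stdout plus an error that
// preserves the exit code and stderr, mirroring the failures above.
func runKubectl(ns string, args ...string) (string, error) {
	base := []string{
		"--server=https://34.83.110.108",
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=" + ns,
	}
	cmd := exec.Command("kubectl", append(base, args...)...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return stdout.String(), fmt.Errorf("rc: %d, stderr: %s", ee.ExitCode(), stderr.String())
		}
		return stdout.String(), err
	}
	return stdout.String(), nil
}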
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/cloud/gcp/addon_update.go:353 k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc0052be1a0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc005014108?}, 0x1d?) test/e2e/cloud/gcp/addon_update.go:353 +0x54 k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3() test/e2e/cloud/gcp/addon_update.go:311 +0x1025 There were additional failures detected after the initial failure: [FAILED] Nov 27 00:19:13.174: failed to list events in namespace "addon-update-test-8479": Get "https://34.83.110.108/api/v1/namespaces/addon-update-test-8479/events": dial tcp 34.83.110.108:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 27 00:19:13.214: Couldn't delete ns: "addon-update-test-8479": Delete "https://34.83.110.108/api/v1/namespaces/addon-update-test-8479": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/addon-update-test-8479", Err:(*net.OpError)(0xc00536f9f0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:14:06.966 Nov 27 00:14:06.966: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/27/22 00:14:06.968 STEP: Waiting for a default service account to be provisioned in namespace 11/27/22 00:14:07.339 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/27/22 00:14:07.427 [BeforeEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:223 [It] should propagate add-on file changes [Slow] test/e2e/cloud/gcp/addon_update.go:244 Nov 27 00:14:11.148: INFO: Executing 'mkdir -p addon-test-dir/addon-update-test-8479' on 34.83.110.108:22 Nov 27 00:14:11.368: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-reconcile-controller.yaml' on 34.83.110.108:22 Nov 27 00:14:11.486: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-reconcile-controller-Updated.yaml' on 34.83.110.108:22 Nov 27 00:14:11.604: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-deprecated-label-service.yaml' on 34.83.110.108:22 Nov 27 00:14:11.722: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-deprecated-label-service-updated.yaml' on 34.83.110.108:22 Nov 27 00:14:11.839: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-ensure-exists-service.yaml' on 34.83.110.108:22 Nov 27 00:14:11.957: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/addon-ensure-exists-service-updated.yaml' on 34.83.110.108:22 Nov 27 00:14:12.075: INFO: Writing remote file 'addon-test-dir/addon-update-test-8479/invalid-addon-controller.yaml' on 34.83.110.108:22 Nov 27 00:14:12.193: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 34.83.110.108:22 Nov 27 00:14:12.295: INFO: Executing 'sudo mkdir -p /etc/kubernetes/addons/addon-test-dir/addon-update-test-8479' on 34.83.110.108:22 STEP: copy invalid manifests to the destination dir 11/27/22 00:14:12.387 Nov 27 00:14:12.387: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-8479/invalid-addon-controller.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-8479/invalid-addon-controller.yaml' on 34.83.110.108:22 STEP: copy new manifests 11/27/22 00:14:12.477 Nov 27 00:14:12.477: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-8479/addon-reconcile-controller.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-8479/addon-reconcile-controller.yaml' on 34.83.110.108:22 Nov 27 00:14:12.566: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-8479/addon-deprecated-label-service.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-8479/addon-deprecated-label-service.yaml' on 34.83.110.108:22 Nov 27 00:14:12.656: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-8479/addon-ensure-exists-service.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-8479/addon-ensure-exists-service.yaml' on 34.83.110.108:22 Nov 27 00:14:12.807: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 27 00:14:15.872: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). 
Nov 27 00:14:18.859: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found).
[... the same "not found" entry repeated roughly every 3s through Nov 27 00:15:09.906 ...]
Nov 27 00:15:12.846: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (Get "https://34.83.110.108/api/v1/namespaces/kube-system/replicationcontrollers/addon-reconcile-test": dial tcp 34.83.110.108:443: connect: connection refused).
[... the same "connection refused" entry repeated roughly every 3s through Nov 27 00:19:09.846 ...]
------------------------------
Progress Report for Ginkgo Process #5
Automatically polling progress:
  [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow] (Spec Runtime: 5m4.182s)
    test/e2e/cloud/gcp/addon_update.go:244
    In [It] (Node Runtime: 5m0s)
      test/e2e/cloud/gcp/addon_update.go:244
      At [By Step] copy new manifests (Step Runtime: 4m58.671s)
        test/e2e/cloud/gcp/addon_update.go:300

  Spec Goroutine
  goroutine 7609 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc005014630, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x30?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2bc362c?, 0xc0054b3680?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005203280?, 0x66e0100?, 0xacfb400?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationController({0x801de88?, 0xc0052be1a0}, {0x75ce977, 0xb}, {0x760025e, 0x14}, 0x1, 0xc005231740?, 0x0?)
      test/e2e/cloud/gcp/addon_update.go:367
  > k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc0052be1a0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc005014108?}, 0x1d?)
      test/e2e/cloud/gcp/addon_update.go:353
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3()
      test/e2e/cloud/gcp/addon_update.go:311
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc005464300, 0xc000e51da0})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 27 00:19:12.846: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (Get "https://34.83.110.108/api/v1/namespaces/kube-system/replicationcontrollers/addon-reconcile-test": dial tcp 34.83.110.108:443: connect: connection refused).
Nov 27 00:19:12.885: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (Get "https://34.83.110.108/api/v1/namespaces/kube-system/replicationcontrollers/addon-reconcile-test": dial tcp 34.83.110.108:443: connect: connection refused).
Nov 27 00:19:12.885: INFO: Unexpected error:
  <*errors.errorString | 0xc001065a70>: {
    s: "error waiting for ReplicationController kube-system/addon-reconcile-test to appear: timed out waiting for the condition",
  }
Nov 27 00:19:12.885: FAIL: error waiting for ReplicationController kube-system/addon-reconcile-test to appear: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc0052be1a0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc005014108?}, 0x1d?)
	test/e2e/cloud/gcp/addon_update.go:353 +0x54
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3()
	test/e2e/cloud/gcp/addon_update.go:311 +0x1025
Nov 27 00:19:12.886: INFO: Cleaning up ensure exist class addon.
Nov 27 00:19:12.925: INFO: Unexpected error:
  <*url.Error | 0xc001633290>: {
    Op: "Delete",
    URL: "https://34.83.110.108/api/v1/namespaces/kube-system/services/addon-ensure-exists-test",
    Err: <*net.OpError | 0xc0054fd040>{
      Op: "dial",
      Net: "tcp",
      Source: nil,
      Addr: <*net.TCPAddr | 0xc005510360>{
        IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
        Port: 443,
        Zone: "",
      },
      Err: <*os.SyscallError | 0xc004f81e60>{
        Syscall: "connect",
        Err: <syscall.Errno>0x6f,
      },
    },
  }
Nov 27 00:19:12.925: FAIL: Delete "https://34.83.110.108/api/v1/namespaces/kube-system/services/addon-ensure-exists-test": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3.1()
	test/e2e/cloud/gcp/addon_update.go:308 +0xe5
panic({0x70eb7e0, 0xc0054e5650})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc001598000, 0x77}, {0xc00045b7a8?, 0xc001598000?, 0xc00045b7d0?})
	test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc001065a70}, {0x0?, 0x760025e?, 0x14?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc0052be1a0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc005014108?}, 0x1d?)
	test/e2e/cloud/gcp/addon_update.go:353 +0x54
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3()
	test/e2e/cloud/gcp/addon_update.go:311 +0x1025
Nov 27 00:19:12.925: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 34.83.110.108:22
Nov 27 00:19:13.014: INFO: Executing 'rm -rf addon-test-dir' on 34.83.110.108:22
[AfterEach] [sig-cloud-provider-gcp] Addon update
  test/e2e/framework/node/init/init.go:32
Nov 27 00:19:13.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-cloud-provider-gcp] Addon update
  test/e2e/cloud/gcp/addon_update.go:237
[DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:19:13.134
STEP: Collecting events from namespace "addon-update-test-8479". 11/27/22 00:19:13.134
Nov 27 00:19:13.174: INFO: Unexpected error: failed to list events in namespace "addon-update-test-8479":
  <*url.Error | 0xc005230f60>: {
    Op: "Get",
    URL: "https://34.83.110.108/api/v1/namespaces/addon-update-test-8479/events",
    Err: <*net.OpError | 0xc003402280>{
      Op: "dial",
      Net: "tcp",
      Source: nil,
      Addr: <*net.TCPAddr | 0xc002dfa210>{
        IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108],
        Port: 443,
        Zone: "",
      },
      Err: <*os.SyscallError | 0xc0055011a0>{
        Syscall: "connect",
        Err: <syscall.Errno>0x6f,
      },
    },
  }
Nov 27 00:19:13.174: FAIL: failed to list events in namespace "addon-update-test-8479": Get "https://34.83.110.108/api/v1/namespaces/addon-update-test-8479/events": dial tcp 34.83.110.108:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0053e25c0, {0xc005014108, 0x16})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0052be1a0}, {0xc005014108, 0x16})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0053e2650?, {0xc005014108?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001275d10)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc0054b8200?, 0x2622b3d?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x3?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0054b8200?, 0x2623270?}, {0xae73300?, 0x26225bd?, 0x0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update
  tear down framework | framework.go:193
STEP: Destroying namespace "addon-update-test-8479" for this suite. 11/27/22 00:19:13.175
Nov 27 00:19:13.214: FAIL: Couldn't delete ns: "addon-update-test-8479": Delete "https://34.83.110.108/api/v1/namespaces/addon-update-test-8479": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/addon-update-test-8479", Err:(*net.OpError)(0xc00536f9f0)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001275d10)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc0054b8180?, 0xc00290efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc0054b8180?, 0x0?}, {0xae73300?, 0x5?, 0xc005014ea0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
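The failure above is a polling timeout, not a test-logic bug: per the stack trace, waitForReplicationController (test/e2e/cloud/gcp/addon_update.go:367) wraps a client-go Get in wait.PollImmediate until the ReplicationController appears, and every poll failed because the apiserver refused connections. A minimal sketch of that pattern, assuming illustrative names, interval, and timeout rather than the suite's real helper:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRC is a hypothetical stand-in for the test's helper: it polls until
// the named ReplicationController exists (exist=true) or is gone (exist=false).
// The 3s/5m values are assumptions chosen to match the log cadence above.
func waitForRC(c kubernetes.Interface, namespace, name string, exist bool) error {
	err := wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().ReplicationControllers(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case err == nil:
			return exist, nil // found: done only if we were waiting for it to appear
		case apierrors.IsNotFound(err):
			return !exist, nil // "not found": done only if we wanted it gone
		default:
			// e.g. "connection refused" while the apiserver is down: log and keep polling
			fmt.Printf("Get ReplicationController %s in namespace %s failed (%v).\n", name, namespace, err)
			return false, nil
		}
	})
	if err != nil {
		return fmt.Errorf("error waiting for ReplicationController %s/%s: %w", namespace, name, err)
	}
	return nil
}
```

Because transient Get errors return (false, nil), the loop silently absorbs an apiserver outage until the timeout fires, which is exactly the "timed out waiting for the condition" failure mode logged above.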
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1492
k8s.io/kubernetes/test/e2e/network.glob..func20.7()
	test/e2e/network/loadbalancer.go:1492 +0x155
from junit_01.xml
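The log that follows shows the test creating a type=LoadBalancer Service with ExternalTrafficPolicy=Local, then waiting up to 15m for an ingress address via TestJig.WaitForLoadBalancer. A rough sketch of that flow, assuming an illustrative function and port numbers rather than the framework's actual TestJig API:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createLocalLBAndWait is a hypothetical approximation of
// CreateOnlyLocalLoadBalancerService: create the Service, then poll until the
// cloud provider assigns a load-balancer ingress. The 15m timeout matches the
// "Waiting up to 15m0s" log line; the 2s interval is an assumption.
func createLocalLBAndWait(c kubernetes.Interface, namespace, name string) (*v1.Service, error) {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: v1.ServiceSpec{
			Type:                  v1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": name},
			Ports:                 []v1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := c.CoreV1().Services(namespace).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		return nil, err
	}
	var out *v1.Service
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		s, err := c.CoreV1().Services(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Matches the "Retrying ...." lines below: transient apiserver errors are tolerated.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		out = s
		return len(s.Status.LoadBalancer.Ingress) > 0, nil
	})
	return out, err
}
```

As with the addon test, the wait loop treats "connection refused" as retryable, so an apiserver outage simply consumes the entire timeout budget.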
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 23:57:18.715 Nov 26 23:57:18.715: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 23:57:18.717 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:57:18.854 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:57:18.935 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-1750/external-local-update with type=LoadBalancer 11/26/22 23:57:19.178 STEP: setting ExternalTrafficPolicy=Local 11/26/22 23:57:19.178 STEP: waiting for loadbalancer for service esipp-1750/external-local-update 11/26/22 23:57:19.242 Nov 26 23:57:19.242: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer Nov 27 00:00:17.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:19.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:25.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:27.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:29.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:31.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:33.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:35.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:37.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:39.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:41.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:43.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:45.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:47.324: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:49.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:51.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:53.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:55.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:57.325: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:59.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:11.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:13.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:15.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:17.324: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m0.418s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m0.002s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 4m59.891s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:19.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:25.323: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:27.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:29.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:31.324: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:33.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:35.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:37.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m20.421s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m20.004s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 5m19.894s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:39.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:41.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:43.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:45.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:47.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:49.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:51.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:53.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:55.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:57.324: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m40.423s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 5m39.896s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:59.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:01.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:03.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:05.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:07.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:09.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:11.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:13.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:15.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:17.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m0.425s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m0.009s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 5m59.898s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:03:19.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:23.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:25.321: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:27.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m20.428s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m20.012s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 6m19.901s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m40.432s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m40.015s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 6m39.905s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) 
	test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.7()
	test/e2e/network/loadbalancer.go:1491
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[The identical progress report and goroutine stack repeated every ~20 s with only the runtimes advancing (Spec Runtime 7m0.434s through 10m40.538s); the step "waiting for loadbalancer for service esipp-1750/external-local-update" made no progress.]
------------------------------
Nov 27 00:08:15.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
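Each "Retrying ...." line shows the poll condition treating the apiserver outage as transient: the failed GET is logged and swallowed by returning (false, nil), so the loop keeps spinning against the refused connection until the outer timeout fires. A hedged sketch of that behavior follows; the helper name and wiring are hypothetical, not the framework's actual implementation:

// sketch of an error-swallowing poll condition.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress mirrors the logged behavior: API errors such
// as "connection refused" are printed and treated as transient, so polling
// only stops on success or when the overall timeout is exhausted.
func waitForLoadBalancerIngress(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil // transient: swallow the error and poll again
		}
		// Done once the cloud provider has populated an ingress address.
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}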
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 11m0.539s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 11m0.123s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 11m0.012s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:08:19.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:25.323: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:27.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:29.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:31.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:33.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:35.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:37.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:39.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 11m21.533s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 11m21.116s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 11m21.006s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:08:41.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:43.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:45.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:47.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:49.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:51.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:53.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:55.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:57.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:08:59.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 11m41.535s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 11m41.119s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 11m41.008s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:01.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:03.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:05.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:07.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:09.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:11.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:13.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:15.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:17.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:19.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 12m1.537s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 12m1.121s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 12m1.01s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:21.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:25.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:27.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:29.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:31.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:33.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:35.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:37.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:39.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 12m21.54s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 12m21.124s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 12m21.013s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:41.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:43.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:45.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:47.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:49.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:51.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:53.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:55.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:57.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:59.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 12m41.542s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 12m41.126s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 12m41.015s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:10:01.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:03.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:05.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:07.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:09.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:11.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:13.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:15.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:17.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:19.322: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #23 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 13m1.544s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 13m1.128s) test/e2e/network/loadbalancer.go:1480 At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 13m1.017s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 943 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1491 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:10:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:25.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:27.323: INFO: Retrying .... 
Nov 27 00:10:21.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:25.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:27.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:29.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:31.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:33.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:35.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:37.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:39.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #23
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 13m21.548s)
    test/e2e/network/loadbalancer.go:1480
    In [It] (Node Runtime: 13m21.132s)
      test/e2e/network/loadbalancer.go:1480
      At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 13m21.021s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 943 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
      test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?)
      test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0)
      test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?)
      test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1491
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 27 00:10:41.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:43.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:45.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:47.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:49.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:51.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:53.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:55.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:57.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:10:59.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
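The same goroutine appears in every progress report above: the spec is parked inside wait.PollImmediate, which re-runs a condition function at a fixed interval until it returns true or the overall timeout expires. A minimal sketch of that pattern under stated assumptions: the helper name waitForLoadBalancerIngress is illustrative rather than the framework's, the 2-second interval is taken from the retry cadence in the log, and a plain client-go clientset stands in for the framework's client.

```go
// Sketch of the polling loop the stack frames above describe: GET the
// Service until its status reports a load-balancer ingress, tolerating
// transient apiserver errors along the way.
package svcwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForLoadBalancerIngress(c kubernetes.Interface, ns, name string, timeout time.Duration) (*v1.Service, error) {
	var svc *v1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient apiserver errors (such as the connection
			// refusals above) are logged and retried, not fatal.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		// Done once the cloud provider has published an ingress
		// IP or hostname on the Service status.
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil
		}
		svc = s
		return true, nil
	})
	return svc, err
}
```

When the timeout elapses first, wait.PollImmediate returns its timeout error, which is what eventually surfaces in the failure below.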
------------------------------
Progress Report for Ginkgo Process #23
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 13m41.551s)
    test/e2e/network/loadbalancer.go:1480
    In [It] (Node Runtime: 13m41.135s)
      test/e2e/network/loadbalancer.go:1480
      At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 13m41.024s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 943 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
      test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?)
      test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0)
      test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?)
      test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1491
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 27 00:11:01.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:03.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:05.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:07.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:09.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:11.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:13.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:15.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:17.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:19.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #23
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 14m1.553s)
    test/e2e/network/loadbalancer.go:1480
    In [It] (Node Runtime: 14m1.137s)
      test/e2e/network/loadbalancer.go:1480
      At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 14m1.026s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 943 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
      test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?)
      test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0)
      test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?)
      test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1491
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
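For orientation, the three nested entries in each report header (the spec, the [It] node, the [By Step]) map directly onto the Ginkgo v2 structure of the test. A hedged sketch of that shape, with the bodies elided; only the quoted strings come from the log, and the real test lives at test/e2e/network/loadbalancer.go:1480.

```go
// Skeleton only: shows how Describe/It/By produce the header lines of the
// progress reports above. Not the actual test body.
package network

import "github.com/onsi/ginkgo/v2"

var _ = ginkgo.Describe("[sig-network] LoadBalancers ESIPP [Slow]", func() {
	ginkgo.It("should handle updates to ExternalTrafficPolicy field", func() {
		// Each ginkgo.By call becomes the "At [By Step]" line in a
		// progress report while that step is executing.
		ginkgo.By("waiting for loadbalancer for service esipp-1750/external-local-update")
		// ... create the Service and poll for its load balancer here ...
	})
})
```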
Nov 27 00:11:21.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:23.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:25.322: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:27.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:11:29.323: INFO: Retrying .... error trying to get Service external-local-update: Get "https://34.83.110.108/api/v1/namespaces/esipp-1750/services/external-local-update": dial tcp 34.83.110.108:443: connect: connection refused
------------------------------
Progress Report for Ginkgo Process #23
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 14m21.556s)
    test/e2e/network/loadbalancer.go:1480
    In [It] (Node Runtime: 14m21.139s)
      test/e2e/network/loadbalancer.go:1480
      At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 14m21.029s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 943 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
      test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?)
      test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0)
      test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?)
      test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1491
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #23
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 14m41.558s)
    test/e2e/network/loadbalancer.go:1480
    In [It] (Node Runtime: 14m41.142s)
      test/e2e/network/loadbalancer.go:1480
      At [By Step] waiting for loadbalancer for service esipp-1750/external-local-update (Step Runtime: 14m41.031s)
        test/e2e/framework/service/jig.go:260

  Spec Goroutine
  goroutine 943 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0033592c0, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0015f7680?, 0xc004427440?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000bc19b0?, 0x7fa7740?, 0xc000214c40?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003ca78b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
      test/e2e/framework/service/jig.go:631
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003ca78b0, 0x45?)
      test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003ca78b0, 0x6aba880?, 0xc0044276f0)
      test/e2e/framework/service/jig.go:261
  > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003ca78b0, 0x0?, 0x1, 0x0?)
      test/e2e/framework/service/jig.go:222
  > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1491
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0041b9fa0, 0x2fd9502})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 27 00:12:19.364: INFO: Unexpected error:
<*fmt.wrapError | 0xc0010dbde0>: {
    msg: "timed out waiting for service \"external-local-update\" to have a load balancer: timed out waiting for the condition",
    err: <*errors.errorString | 0xc000213cb0>{
        s: "timed out waiting for the condition",
    },
}
Nov 27 00:12:19.364: FAIL: timed out waiting for service "external-local-update" to have a load balancer: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.7()
	test/e2e/network/loadbalancer.go:1492 +0x155
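The <*fmt.wrapError> dumped in the failure above is what fmt.Errorf produces when the format string uses the %w verb: the outer message adds context ("timed out waiting for service ... to have a load balancer") while the inner error stays reachable for errors.Is/errors.As. The inner <*errors.errorString> with the text "timed out waiting for the condition" matches apimachinery's wait.ErrWaitTimeout sentinel. A self-contained sketch of that mechanics; the variable names are illustrative.

```go
// Standalone illustration of %w wrapping, mirroring the error dump above.
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Analogous to wait.ErrWaitTimeout, an *errors.errorString sentinel.
	errTimedOut := errors.New("timed out waiting for the condition")

	// %w yields a *fmt.wrapError, the dynamic type shown in the dump.
	err := fmt.Errorf("timed out waiting for service %q to have a load balancer: %w",
		"external-local-update", errTimedOut)

	fmt.Println(err)                         // wrapped message with context
	fmt.Println(errors.Is(err, errTimedOut)) // true: the cause is preserved
}
```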
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 27 00:12:19.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 27 00:12:19.446: INFO: Output of kubectl describe svc:
Nov 27 00:12:19.446: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=esipp-1750 describe svc --namespace=esipp-1750'
Nov 27 00:12:19.778: INFO: stderr: ""
Nov 27 00:12:19.778: INFO: stdout: "Name: external-local-update\nNamespace: esipp-1750\nLabels: testid=external-local-update-ccb98e59-9e4a-432b-8fa5-9c1bd905d2ac\nAnnotations: <none>\nSelector: testid=external-local-update-ccb98e59-9e4a-432b-8fa5-9c1bd905d2ac\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.76.45\nIPs: 10.0.76.45\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 32638/TCP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Local\nHealthCheck NodePort: 32437\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 14m service-controller Ensuring load balancer\n"
Nov 27 00:12:19.778: INFO:
Name:                     external-local-update
Namespace:                esipp-1750
Labels:                   testid=external-local-update-ccb98e59-9e4a-432b-8fa5-9c1bd905d2ac
Annotations:              <none>
Selector:                 testid=external-local-update-ccb98e59-9e4a-432b-8fa5-9c1bd905d2ac
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.76.45
IPs:                      10.0.76.45
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32638/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32437
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  14m   service-controller  Ensuring load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:12:19.778
STEP: Collecting events from namespace "esipp-1750". 11/27/22 00:12:19.778
STEP: Found 1 events.
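The describe output above shows what the controller was asked to build: a SingleStack IPv4 LoadBalancer Service with External Traffic Policy Local and an allocated HealthCheck NodePort, whose Endpoints list is still empty and whose only event is EnsuringLoadBalancer, i.e. the cloud provider never finished provisioning. A minimal sketch of a Service object with the same externally visible shape; the ports and selector are copied from the describe output, and this is not the test jig's own constructor.

```go
// Sketch of a Service spec matching the kubectl describe fields above.
package network

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// externalLocalService builds a LoadBalancer Service whose
// ExternalTrafficPolicy is Local; that policy is why the apiserver
// allocates a HealthCheck NodePort (32437 above) for the Service.
func externalLocalService(selector map[string]string) *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-update"},
		Spec: v1.ServiceSpec{
			Type:                  v1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              selector,
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				Protocol:   v1.ProtocolTCP,
			}},
		},
	}
}
```

With ExternalTrafficPolicy set to Local, only nodes that actually host a backing pod pass the health check on that NodePort, which preserves the client source IP; the node dumps that follow are the framework collecting diagnostics after the provisioning timeout.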
11/27/22 00:12:19.82 Nov 27 00:12:19.820: INFO: At 2022-11-26 23:57:43 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 27 00:12:19.862: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 00:12:19.862: INFO: Nov 27 00:12:19.910: INFO: Logging node info for node bootstrap-e2e-master Nov 27 00:12:19.952: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 05f7f9a2-a79e-4352-8a79-75844a59633a 4121 0 2022-11-26 23:53:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 23:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 
14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:54:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.110.108,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:21ea7dd3-945c-4bf1-ab0c-68e321320196,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:19.953: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 27 00:12:20.001: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 27 00:12:20.044: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 27 00:12:20.044: INFO: Logging node info for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.086: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-1tnv 846ba36d-94f6-4e94-b203-fd107e853327 4213 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-1tnv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-1tnv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2739":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-2574":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-4384":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-715":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-9757":"bootstrap-e2e-minion-group-1tnv","csi-mock-csi-mock-volumes-8953":"csi-mock-csi-mock-volumes-8953"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:58:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-27 00:12:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-1tnv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.94.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b50e399c30b5a1961e4b37e487500979,SystemUUID:b50e399c-30b5-a196-1e4b-37e487500979,BootID:049b80da-b98b-4e50-9ce2-87280edcdc78,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122 kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347 
kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2027^1fb4840f-6de6-11ed-b41a-96c9bb8b92a9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2739^2a7e0a80-6de6-11ed-83a6-224e25ee64d6,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1004^f7751827-6de5-11ed-a986-3af9dba8f2ba,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9757^1e551df0-6de6-11ed-acef-92a46bd148c0,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4384^1da125f1-6de6-11ed-9f92-ce37b6f7d123,DevicePath:,},},Config:nil,},} Nov 27 00:12:20.086: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.130: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.172: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-1tnv: error trying to reach service: No agent available Nov 27 00:12:20.172: INFO: Logging node info for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.214: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2qlj 2a4622eb-989e-4a24-9c67-05b1d3225d2a 4181 0 2022-11-26 23:53:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2qlj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-2qlj,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 
UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.85.27,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ff92829536287513f2f9166c8337ac18,SystemUUID:ff928295-3628-7513-f2f9-166c8337ac18,BootID:f8a0168e-4102-47f3-bee3-ed410278cdb0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:20.214: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.258: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.300: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2qlj: error trying to reach service: No agent available Nov 27 00:12:20.300: INFO: Logging node info for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.342: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rbx5 71a3da10-2d41-41fe-9331-fb855b0bb42f 4174 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rbx5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rbx5 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-rbx5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.203.146.23,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3e37064ccccf109daf185b691fbc1e81,SystemUUID:3e37064c-cccf-109d-af18-5b691fbc1e81,BootID:aa4950bb-b649-4cf2-8496-bbb949f31f9b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:20.342: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.385: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.427: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-rbx5: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-1750" for this suite. 11/27/22 00:12:20.427
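The recurring "Unable to retrieve kubelet pods for node ...: error trying to reach service: No agent available" lines mean the apiserver had no tunnel to the node: in this cluster kubelet endpoints are reached through the Konnectivity network proxy (the registry.k8s.io/kas-network-proxy/proxy-agent image appears in both nodes' image lists), and "No agent available" is the proxy's error when no agent has dialed back to the proxy server. A minimal sketch of the node-proxy request the framework makes here, assuming client-go and reusing the kubeconfig path and node name from this log:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/<node>/proxy/pods -- the request that surfaces
	// "error trying to reach service: No agent available" when the
	// apiserver cannot reach the node through the Konnectivity agent.
	raw, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("bootstrap-e2e-minion-group-rbx5").
		SubResource("proxy").
		Suffix("pods").
		DoRaw(context.TODO())
	if err != nil {
		fmt.Println("kubelet pods unavailable:", err)
		return
	}
	fmt.Printf("got %d bytes of PodList JSON\n", len(raw))
}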
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/network/loadbalancer.go:1363 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1363 +0x130 There were additional failures detected after the initial failure: [FAILED] Nov 27 00:29:47.125: failed to list events in namespace "esipp-8310": Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/events": dial tcp 34.83.110.108:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 27 00:29:47.165: Couldn't delete ns: "esipp-8310": Delete "https://34.83.110.108/api/v1/namespaces/esipp-8310": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/esipp-8310", Err:(*net.OpError)(0xc003716730)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 from junit_01.xml
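Note that the spec failure and both DeferCleanup failures above share one root cause: nothing is accepting connections on 34.83.110.108:443, so every apiserver request (the event list, the namespace delete, and the service polling below) dies with "connection refused". A quick stand-alone probe that reproduces that dial error while the control plane is down; a sketch, not part of the suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint every failing request in this log targets.
	addr := "34.83.110.108:443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Expected while the apiserver is down:
		// "dial tcp 34.83.110.108:443: connect: connection refused"
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver endpoint reachable:", addr)
}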
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:14:46.153 Nov 27 00:14:46.153: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/27/22 00:14:46.155 STEP: Waiting for a default service account to be provisioned in namespace 11/27/22 00:14:46.339 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/27/22 00:14:46.431 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-8310/external-local-nodes with type=LoadBalancer 11/27/22 00:14:46.703 STEP: setting ExternalTrafficPolicy=Local 11/27/22 00:14:46.703 STEP: waiting for loadbalancer for service esipp-8310/external-local-nodes 11/27/22 00:14:46.81 Nov 27 00:14:46.810: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer Nov 27 00:15:12.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:14.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:16.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:18.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:20.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:22.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:24.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:26.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:28.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:30.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:32.900: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused
[... the identical "Retrying ...." / connection-refused entry recurs every ~2s from Nov 27 00:15:34 through Nov 27 00:19:32; near-verbatim repetitions elided ...]
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:34.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:36.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:38.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:40.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:42.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:44.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m0.5s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 4m59.843s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:19:46.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:48.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:50.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:52.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:54.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:56.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:19:58.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:00.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:02.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:04.899: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m20.502s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 5m19.845s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:20:06.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:08.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:10.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:12.900: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:14.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:16.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:18.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:20.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:22.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:24.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m40.506s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m40.006s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 5m39.849s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:20:26.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:28.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:30.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:32.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:34.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:36.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:38.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:40.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:42.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:20:44.901: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m0.508s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m0.008s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 5m59.851s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m20.51s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m20.01s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 6m19.852s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) 
------------------------------
[Identical progress reports for Ginkgo Process #25 repeated every 20s at Spec Runtimes 6m20.51s, 6m40.511s, 7m0.513s, 7m20.516s, 7m40.519s, 8m0.52s, 8m20.523s, 8m40.524s, 9m0.527s, and 9m20.528s; in each, goroutine 3425 shows the same stack, parked in the wait.PollImmediate select under (*TestJig).WaitForLoadBalancer.]
------------------------------
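The recurring "Progress Report" blocks are Ginkgo v2's progress polling: once a node runs past the configured threshold, Ginkgo dumps the spec goroutine's stack at a fixed interval, here every 20s. Recent Ginkgo v2 releases expose PollProgressAfter/PollProgressInterval decorators (and matching --poll-progress-after/--poll-progress-interval CLI flags) to tune the cadence per spec; a sketch with made-up durations:

package e2esketch

import (
    "time"

    "github.com/onsi/ginkgo/v2"
)

// Illustrative only: ask Ginkgo to start reporting progress for this spec
// after 2 minutes and then every 30 seconds thereafter.
var _ = ginkgo.It("should only target nodes with endpoints",
    ginkgo.PollProgressAfter(2*time.Minute),
    ginkgo.PollProgressInterval(30*time.Second),
    func() {
        // ... test body ...
    })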
------------------------------
Progress Report for Ginkgo Process #25
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m40.53s)
    test/e2e/network/loadbalancer.go:1346
    In [It] (Node Runtime: 9m40.03s)
      test/e2e/network/loadbalancer.go:1346
      At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 9m39.873s)
        test/e2e/framework/service/jig.go:260

Spec Goroutine
goroutine 3425 [select]
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000d36600, 0xc001863e00)
    vendor/golang.org/x/net/http2/transport.go:1200
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc00011b700, 0xc001863e00, {0xe0?})
    vendor/golang.org/x/net/http2/transport.go:519
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
    vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0019837c0?}, 0xc001863e00?)
    vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc0019837c0, 0xc001863e00)
    /usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0038022d0?)
    /usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc003b516b0, 0xc001863d00)
    vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0008dbd00, 0xc001863c00)
    vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc001863c00, {0x7fad100, 0xc0008dbd00}, {0x74d54e0?, 0x1?, 0x0?})
    /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc003b516e0, 0xc001863c00, {0x7f58ce81ea68?, 0x100?, 0x0?})
    /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc003b516e0, 0xc001863c00)
    /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
    /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001863a00, {0x7fe0bc8, 0xc0000820e0}, 0x0?)
    vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001863a00, {0x7fe0bc8, 0xc0000820e0})
    vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*services).Get(0xc001358ce0, {0x7fe0bc8, 0xc0000820e0}, {0x7600c22, 0x14}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
    vendor/k8s.io/client-go/kubernetes/typed/core/v1/service.go:79
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition.func1()
    test/e2e/framework/service/jig.go:620
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc001987b50?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?)
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10)
    test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?)
    test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.5()
    test/e2e/network/loadbalancer.go:1353
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
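Note the one distinct stack in the report above: the poll's condition function was caught mid-request, blocked inside the http2 round trip of the Services GET. Together with the connection-refused retries, that implicates the control plane itself (nothing is accepting connections on 34.83.110.108:443) rather than the load balancer under test. A hypothetical probe, not part of the suite, that separates a closed port from a network timeout:

package e2esketch

import (
    "fmt"
    "net"
    "time"
)

// probeAPIServer distinguishes "connection refused" (port closed; apiserver
// down or restarting) from a dial timeout (network or firewall problem).
func probeAPIServer(hostPort string) {
    conn, err := net.DialTimeout("tcp", hostPort, 3*time.Second)
    if err != nil {
        fmt.Printf("apiserver unreachable: %v\n", err)
        return
    }
    conn.Close()
    fmt.Println("TCP connect OK; the failure is above the transport layer")
}

Run against 34.83.110.108:443 during this window, it would presumably report the same connect: connection refused seen in the retry lines.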
------------------------------
[The usual report, with goroutine 3425 back in the wait.PollImmediate select, repeated at Spec Runtimes 10m0.532s, 10m20.535s, 10m40.537s, 11m0.539s, 11m20.542s, 11m40.544s, and 12m0.546s.]
------------------------------
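For context on what the test is waiting for: per its name, (*TestJig).CreateOnlyLocalLoadBalancerService (jig.go:222) creates a LoadBalancer Service with externalTrafficPolicy: Local, the setting that makes the cloud load balancer health-check and target only nodes that actually host endpoints. A minimal equivalent object (selector and ports are illustrative, not the jig's exact output; the name and namespace are taken from this run):

package e2esketch

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// externalLocalSvc sketches the shape of the Service under test.
var externalLocalSvc = &v1.Service{
    ObjectMeta: metav1.ObjectMeta{Name: "external-local-nodes", Namespace: "esipp-8310"},
    Spec: v1.ServiceSpec{
        Selector: map[string]string{"app": "external-local-nodes"},
        Type:     v1.ServiceTypeLoadBalancer,
        // "Local" preserves client source IPs and restricts load-balancer
        // targets to nodes with local endpoints (the behavior this spec asserts).
        ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
        Ports: []v1.ServicePort{{
            Port:       80,
            TargetPort: intstr.FromInt(8080),
        }},
    },
}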
------------------------------
[Reports at Spec Runtimes 12m20.549s, 12m40.551s, 13m0.553s, and 13m20.554s followed on the same 20s cadence, each with the unchanged spec goroutine stack, interleaved with the resumed retry loop:]
Nov 27 00:27:20.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused
[... the same "Retrying ...." message repeated every 2s through Nov 27 00:28:24.900 ...]
------------------------------
Progress Report for Ginkgo Process #25
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 13m40.557s)
    test/e2e/network/loadbalancer.go:1346
    In [It] (Node Runtime: 13m40.057s)
      test/e2e/network/loadbalancer.go:1346
      At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 13m39.9s)
        test/e2e/framework/service/jig.go:260

Spec Goroutine
goroutine 3425 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?)
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10)
    test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?)
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:28:26.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:28.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:30.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:32.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:34.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:36.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:38.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:40.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:42.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:44.899: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 14m0.559s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m0.059s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 13m59.901s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:28:46.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:48.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:50.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:52.899: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:54.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:56.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:28:58.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:00.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:02.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:04.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 14m20.562s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m20.062s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 14m19.905s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:29:06.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:08.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:10.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:12.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:14.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:16.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:18.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:20.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:22.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:24.899: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 14m40.564s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 14m40.064s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 14m39.907s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:29:26.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:28.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:30.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:32.899: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:34.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:36.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:38.900: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:40.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:42.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:44.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #25 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 15m0.566s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 15m0.066s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for loadbalancer for service esipp-8310/external-local-nodes (Step Runtime: 14m59.909s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 3425 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003dac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x10?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f65c80?, 0xc00342fa60?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001987a90?, 0x7fa7740?, 0xc00017e680?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002b601e0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002b601e0, 0x44?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc002b601e0, 0x6aba880?, 0xc00342fd10) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc002b601e0, 0x0?, 0x0, 0x0?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1353 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001a5bc80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:29:46.899: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:46.939: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/services/external-local-nodes": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:29:46.939: INFO: Unexpected error: <*fmt.wrapError | 0xc002ed1ee0>: { msg: "timed out waiting for service \"external-local-nodes\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc00017da10>{ s: "timed out waiting for the condition", }, } Nov 27 00:29:46.939: FAIL: timed out waiting for service "external-local-nodes" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1363 +0x130 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 27 00:29:46.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 27 00:29:46.978: INFO: Output of kubectl describe svc: Nov 27 00:29:46.978: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=esipp-8310 describe svc --namespace=esipp-8310' Nov 27 00:29:47.085: INFO: rc: 1 Nov 27 00:29:47.085: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:29:47.085 STEP: Collecting events from namespace "esipp-8310". 
11/27/22 00:29:47.085 Nov 27 00:29:47.124: INFO: Unexpected error: failed to list events in namespace "esipp-8310": <*url.Error | 0xc0035ea4b0>: { Op: "Get", URL: "https://34.83.110.108/api/v1/namespaces/esipp-8310/events", Err: <*net.OpError | 0xc003716410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004941200>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 110, 108], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0013b18a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 27 00:29:47.125: FAIL: failed to list events in namespace "esipp-8310": Get "https://34.83.110.108/api/v1/namespaces/esipp-8310/events": dial tcp 34.83.110.108:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0018bc5c0, {0xc001284e40, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00554c680}, {0xc001284e40, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0018bc650?, {0xc001284e40?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00135e000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001919f20?, 0xc004634f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc004634f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001919f20?, 0x2622c40?}, {0xae73300?, 0xc004634f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-8310" for this suite. 11/27/22 00:29:47.125 Nov 27 00:29:47.165: FAIL: Couldn't delete ns: "esipp-8310": Delete "https://34.83.110.108/api/v1/namespaces/esipp-8310": dial tcp 34.83.110.108:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.110.108/api/v1/namespaces/esipp-8310", Err:(*net.OpError)(0xc003716730)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00135e000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001919e00?, 0xc00458ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001919e00?, 0x0?}, {0xae73300?, 0x5?, 0xc0013655d8?}) /usr/local/go/src/reflect/value.go:368 +0xbc
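Every goroutine dump above bottoms out in wait.PollImmediate: the service jig's WaitForLoadBalancer simply re-reads the Service object until an ingress point appears in its status. A minimal, standalone sketch of that polling pattern with client-go follows; the helper name, the 2s interval, and the hard-coded kubeconfig path are illustrative, not the framework's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLoadBalancer polls the Service until status.loadBalancer.ingress is
// populated, mirroring the PollImmediate-based wait in the stacks above.
func waitForLoadBalancer(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient API errors (such as "connection refused") are
			// swallowed so the poll keeps retrying until the timeout.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// 15m matches the "Waiting up to 15m0s ... to have a LoadBalancer" budget.
	if err := waitForLoadBalancer(cs, "esipp-8310", "external-local-nodes", 15*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Because the condition function returns false, nil on transient errors, an unreachable apiserver surfaces only as the final "timed out waiting for the condition" once the whole 15m budget is spent, exactly as in the log above.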
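Note that it is not only the spec body that fails: the event dump and the namespace deletion during cleanup hit the same dial tcp 34.83.110.108:443: connect: connection refused, so the apiserver endpoint itself was unreachable for the whole window. A hypothetical probe to confirm that at the TCP level (the address is copied from the log; everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 34.83.110.108:443 is the apiserver endpoint from the failures above.
	conn, err := net.DialTimeout("tcp", "34.83.110.108:443", 5*time.Second)
	if err != nil {
		// "connection refused" means the host answered but nothing is
		// listening on 443, e.g. kube-apiserver is down or restarting;
		// a timeout would instead suggest a network or firewall problem.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect to apiserver succeeded")
}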
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/network/loadbalancer.go:1272
k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1272 +0xd8
from junit_01.xml
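The log that follows shows the test creating a Service of type=LoadBalancer, setting its ExternalTrafficPolicy to Local, and then waiting for an ingress address. A minimal client-go sketch of such a Service, with a made-up selector and port layout for illustration (per the stack traces, the framework's own helper for this is TestJig.CreateOnlyLocalLoadBalancerService):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createLocalLBService creates a LoadBalancer Service whose external traffic
// policy is Local, so the cloud LB only targets nodes with local endpoints.
func createLocalLBService(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-lb", Namespace: ns},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "netexec"}, // hypothetical selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080), // hypothetical backend port
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
}

With ExternalTrafficPolicy=Local, the load balancer forwards only to nodes hosting a ready endpoint of the Service, which is the behavior the previous spec ("should only target nodes with endpoints") asserts.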
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 23:57:18.988
Nov 26 23:57:18.988: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename esipp 11/26/22 23:57:18.99
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:57:19.151
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:57:19.232
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1250
[It] should work for type=LoadBalancer
  test/e2e/network/loadbalancer.go:1266
STEP: creating a service esipp-728/external-local-lb with type=LoadBalancer 11/26/22 23:57:19.399
STEP: setting ExternalTrafficPolicy=Local 11/26/22 23:57:19.399
STEP: waiting for loadbalancer for service esipp-728/external-local-lb 11/26/22 23:57:19.477
Nov 26 23:57:19.477: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer
Nov 27 00:00:17.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused
[the identical "connection refused" retry is logged every ~2s through 00:00:59.560, resumes at 00:02:11.561 after a gap, and continues through 00:03:27.560]
------------------------------
Progress Report for Ginkgo Process #16
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m0.411s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 5m0.001s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 4m59.922s)
        test/e2e/framework/service/jig.go:260

Spec Goroutine
goroutine 516 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?)
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28)
    test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?)
    test/e2e/framework/service/jig.go:222
> k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1271
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[the same Progress Report, with an identical goroutine 516 stack, recurs every 20s as the Spec Runtime advances from 5m20.416s through 6m40.425s; no further retries are logged after 00:03:27.560]
------------------------------
Progress Report for Ginkgo Process #16
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m0.427s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 7m0.017s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 6m59.939s)
        test/e2e/framework/service/jig.go:260

Spec Goroutine
goroutine 516 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?)
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28)
    test/e2e/framework/service/jig.go:261
> k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?)
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m20.429s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 7m20.019s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 7m19.941s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m40.431s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 7m40.021s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 7m39.943s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 8m0.434s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 8m0.024s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 7m59.946s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 8m20.436s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 8m20.026s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 8m19.948s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 8m40.438s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 8m40.028s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 8m39.95s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 9m0.44s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 9m0.03s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 8m59.951s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 9m20.442s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 9m20.032s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 9m19.954s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 9m40.444s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 9m40.034s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 9m39.956s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 10m0.446s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 10m0.036s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 9m59.958s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 10m20.448s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 10m20.038s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 10m19.96s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
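The goroutine dump above shows how the wait is structured: the spec body (loadbalancer.go:1271) calls TestJig.CreateOnlyLocalLoadBalancerService, which blocks in WaitForLoadBalancer, which drives wait.PollImmediate from apimachinery. What follows is a minimal sketch of that pattern, not the framework's actual jig code; the function name, the 2 s interval (read off the retry cadence in this log), and the printed message format are illustrative assumptions.

// Sketch of the polling pattern visible in the stack trace above. This is
// NOT the real test/e2e/framework/service jig implementation, only the same
// shape: wait.PollImmediate re-runs a condition func that GETs the Service
// until the cloud provider publishes an ingress point or the timeout fires.
package lbwait

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLoadBalancerIngress polls every 2 s (matching the ~2 s "Retrying ...."
// cadence in the log) until the Service has at least one ingress endpoint.
func WaitForLoadBalancerIngress(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// An unreachable apiserver ("connect: connection refused") lands
			// here. Returning (false, nil) treats the error as retryable,
			// which is why the log keeps printing "Retrying ...." instead of
			// failing fast: the step only ends when the outer timeout expires.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		// Done once status.loadBalancer.ingress is populated.
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

Under this shape, a dead apiserver and a load balancer that never provisions are indistinguishable to the poll loop; both burn the full step timeout, which matches the 13-plus minutes of retries recorded below.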
Nov 27 00:08:15.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:08:17.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused
[... the identical "Retrying ...." line repeats every ~2 s through Nov 27 00:08:59.561, interleaved with the same Progress Report at Spec Runtime 11m0.499s and 11m21.274s ...]
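Every retry in this run fails the same way: the TCP dial to 34.83.110.108:443 is refused, meaning nothing is listening at the apiserver address at all. That condition can be told apart from an in-cluster test failure by unwrapping the error chain; a small standard-library sketch, not part of the e2e framework:

// Illustrative helper only. A failed dial from an HTTP client surfaces as a
// *url.Error wrapping a *net.OpError wrapping syscall.ECONNREFUSED, so
// errors.Is can walk the chain and flag these as apiserver-down noise.
package lbwait

import (
	"errors"
	"syscall"
)

func isConnectionRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}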
error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 11m41.277s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 11m40.866s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 11m40.788s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:01.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:03.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:05.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:07.560: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:09.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:11.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:13.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:15.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:17.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:19.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m1.279s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m0.868s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 12m0.79s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:21.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:23.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:25.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:27.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:29.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:31.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:33.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:35.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:37.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:39.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m21.281s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m20.871s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 12m20.793s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:09:41.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:43.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:45.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:47.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:49.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:51.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:53.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:55.560: INFO: Retrying .... 
error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:57.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:09:59.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #16 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 12m41.283s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 12m40.873s) test/e2e/network/loadbalancer.go:1266 At [By Step] waiting for loadbalancer for service esipp-728/external-local-lb (Step Runtime: 12m40.795s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 516 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0044672f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x28?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001ad04e0?, 0xc002bc9b78?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000414370?, 0x7fa7740?, 0xc0001fe580?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc0036e1130, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc0036e1130, 0x40?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc0036e1130, 0x6aba880?, 0xc002bc9e28) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc0036e1130, 0xc003ccf6c0?, 0x1, 0x9?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1271 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00453d950, 0xc0016d1260}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:10:01.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:10:03.561: INFO: Retrying .... 
Nov 27 00:10:01.561 to 00:10:19.560: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused (same message every 2s, 10 occurrences; duplicates elided)
------------------------------
Progress Report for Ginkgo Process #16 (same spec and step as above): Spec Runtime 13m1.285s, Node Runtime 13m0.875s, Step Runtime 13m0.797s; Spec Goroutine goroutine 516 [select], stack identical to the trace above (elided)
------------------------------
Nov 27 00:10:21.561 to 00:10:39.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused (same message every 2s, 10 occurrences; duplicates elided)
------------------------------
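The recurring wait.PollImmediate and PollImmediateWithContext frames (wait.go:514/528 in the trace) are the generic polling primitive underneath TestJig.waitForCondition: the condition is evaluated once immediately, then on every interval tick until it returns true or the timeout elapses, at which point the caller sees "timed out waiting for the condition". A self-contained sketch with illustrative interval and timeout values:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	// Poll every 2s for up to 10s; the condition flips to true after ~6s,
	// so this returns nil. If it never flipped, PollImmediate would return
	// wait.ErrWaitTimeout ("timed out waiting for the condition").
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		// (false, nil) means "not ready yet, keep polling";
		// a non-nil error aborts the poll immediately.
		return time.Since(start) > 6*time.Second, nil
	})
	fmt.Println(err)
}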
Progress Report for Ginkgo Process #16 (same spec and step): Spec Runtime 13m21.287s, Node Runtime 13m20.876s, Step Runtime 13m20.798s; goroutine 516 stack identical to the trace above (elided)
------------------------------
Nov 27 00:10:41.560 to 00:10:59.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused (same message every 2s, 10 occurrences; duplicates elided)
------------------------------
Progress Report for Ginkgo Process #16 (same spec and step): Spec Runtime 13m41.289s, Node Runtime 13m40.879s, Step Runtime 13m40.801s; goroutine 516 stack identical to the trace above (elided)
------------------------------
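The CreateOnlyLocalLoadBalancerService frame (jig.go:222 in the trace) is the step that created the Service this spec is stuck waiting on. A hedged sketch of the kind of object that step submits, assuming a plain client-go clientset; the function name and selector value here are illustrative (the real selector carries a generated testid UUID, as the describe output further down shows):

package esipp

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createOnlyLocalLB creates a type=LoadBalancer Service with
// externalTrafficPolicy=Local, the configuration the ESIPP suite exercises.
func createOnlyLocalLB(cs kubernetes.Interface, ns string) (*v1.Service, error) {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-lb", Namespace: ns},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"testid": "external-local-lb"},
			Type:     v1.ServiceTypeLoadBalancer,
			// "Local" preserves the client source IP; kube-proxy then serves a
			// dedicated health-check NodePort so the cloud LB can tell which
			// nodes actually have local endpoints.
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
}

That health-check port is what shows up in the describe output below as "HealthCheck NodePort: 30130".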
Nov 27 00:11:01.561 to 00:11:19.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused (same message every 2s, 10 occurrences; duplicates elided)
------------------------------
Progress Report for Ginkgo Process #16 (same spec and step): Spec Runtime 14m1.292s, Node Runtime 14m0.882s, Step Runtime 14m0.803s; goroutine 516 stack identical to the trace above (elided)
------------------------------
Nov 27 00:11:21.561 to 00:11:29.561: INFO: Retrying .... error trying to get Service external-local-lb: Get "https://34.83.110.108/api/v1/namespaces/esipp-728/services/external-local-lb": dial tcp 34.83.110.108:443: connect: connection refused (same message every 2s, 5 occurrences; duplicates elided)
------------------------------
Progress Report for Ginkgo Process #16 (same spec and step): Spec Runtime 14m21.295s, Node Runtime 14m20.885s, Step Runtime 14m20.807s; goroutine 516 stack identical to the trace above (elided)
------------------------------
Progress Report for Ginkgo Process #16 (same spec and step, no further Retrying lines between these two reports): Spec Runtime 14m41.297s, Node Runtime 14m40.887s, Step Runtime 14m40.809s; goroutine 516 stack identical to the trace above (elided)
------------------------------
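For reference, the condition that WaitForLoadBalancer (jig.go:582 in the trace) is polling for is the cloud provider writing an ingress point into the Service status. A minimal sketch of that check, with an assumed clientset and an illustrative helper name:

package esipp

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancer returns once the Service reports at least one
// LoadBalancer ingress IP or hostname, or fails with the timeout error
// seen in this log.
func waitForLoadBalancer(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Tolerate transient apiserver errors (like the connection
			// refusals above) and keep polling until the deadline.
			return false, nil
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

In this run the condition never became true before the deadline: the describe output below shows the Service with a ClusterIP and NodePort allocated but no LoadBalancer ingress and Endpoints: <none>, consistent with the apiserver at 34.83.110.108 having been unreachable for most of the roughly 15-minute wait.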
Nov 27 00:12:19.603: INFO: Unexpected error:
<*fmt.wrapError | 0xc000e9c4c0>: {
    msg: "timed out waiting for service \"external-local-lb\" to have a load balancer: timed out waiting for the condition",
    err: <*errors.errorString | 0xc0001fd940>{
        s: "timed out waiting for the condition",
    },
}
Nov 27 00:12:19.603: FAIL: timed out waiting for service "external-local-lb" to have a load balancer: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1272 +0xd8
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32
Nov 27 00:12:19.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260
Nov 27 00:12:19.690: INFO: Output of kubectl describe svc:
Nov 27 00:12:19.690: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=esipp-728 describe svc --namespace=esipp-728'
Nov 27 00:12:20.013: INFO: stderr: ""
Nov 27 00:12:20.013: INFO: stdout: (escaped duplicate of the formatted output below; elided)
Nov 27 00:12:20.013: INFO: Name: external-local-lb
Namespace: esipp-728
Labels: testid=external-local-lb-5a0b8cee-fd3d-4741-90e9-2a0539f11b05
Annotations: <none>
Selector: testid=external-local-lb-5a0b8cee-fd3d-4741-90e9-2a0539f11b05
Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.27.152 IPs: 10.0.27.152 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32243/TCP Endpoints: <none> Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 30130 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:12:20.013 STEP: Collecting events from namespace "esipp-728". 11/27/22 00:12:20.013 STEP: Found 0 events. 11/27/22 00:12:20.055 Nov 27 00:12:20.096: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 00:12:20.096: INFO: Nov 27 00:12:20.143: INFO: Logging node info for node bootstrap-e2e-master Nov 27 00:12:20.185: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 05f7f9a2-a79e-4352-8a79-75844a59633a 4121 0 2022-11-26 23:53:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 23:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:54:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.110.108,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:21ea7dd3-945c-4bf1-ab0c-68e321320196,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:20.185: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 27 00:12:20.228: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 27 00:12:20.271: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 27 00:12:20.271: INFO: Logging node info for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.312: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-1tnv 846ba36d-94f6-4e94-b203-fd107e853327 4213 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-1tnv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-1tnv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2739":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-2574":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-4384":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-715":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-9757":"bootstrap-e2e-minion-group-1tnv","csi-mock-csi-mock-volumes-8953":"csi-mock-csi-mock-volumes-8953"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:58:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-27 00:12:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-1tnv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 
00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.94.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b50e399c30b5a1961e4b37e487500979,SystemUUID:b50e399c-30b5-a196-1e4b-37e487500979,BootID:049b80da-b98b-4e50-9ce2-87280edcdc78,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122 kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347 kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2027^1fb4840f-6de6-11ed-b41a-96c9bb8b92a9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2739^2a7e0a80-6de6-11ed-83a6-224e25ee64d6,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1004^f7751827-6de5-11ed-a986-3af9dba8f2ba,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9757^1e551df0-6de6-11ed-acef-92a46bd148c0,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4384^1da125f1-6de6-11ed-9f92-ce37b6f7d123,DevicePath:,},},Config:nil,},} Nov 27 00:12:20.313: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.361: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:20.403: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-1tnv: error trying to reach service: No agent available Nov 27 00:12:20.403: INFO: Logging node info for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.445: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2qlj 2a4622eb-989e-4a24-9c67-05b1d3225d2a 4181 0 2022-11-26 23:53:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2qlj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-2qlj,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: 
{{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.85.27,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ff92829536287513f2f9166c8337ac18,SystemUUID:ff928295-3628-7513-f2f9-166c8337ac18,BootID:f8a0168e-4102-47f3-bee3-ed410278cdb0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:20.445: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.492: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:20.534: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2qlj: error trying to reach service: No agent available Nov 27 00:12:20.534: INFO: Logging node info for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.586: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rbx5 71a3da10-2d41-41fe-9331-fb855b0bb42f 4174 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rbx5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rbx5 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} 
{node-problem-detector Update v1 2022-11-27 00:11:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-rbx5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.203.146.23,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3e37064ccccf109daf185b691fbc1e81,SystemUUID:3e37064c-cccf-109d-af18-5b691fbc1e81,BootID:aa4950bb-b649-4cf2-8496-bbb949f31f9b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:20.586: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.629: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:20.672: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-rbx5: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-728" for this suite. 11/27/22 00:12:20.672
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011d6000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/27/22 00:15:12.58 Nov 27 00:15:12.580: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/27/22 00:15:12.582 Nov 27 00:15:12.621: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:14.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:16.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:18.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:20.662: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:22.662: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:24.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:26.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:28.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:30.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:32.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:34.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:36.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:38.662: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:40.662: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:42.661: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:42.701: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:15:42.701: INFO: Unexpected error: <*errors.errorString | 0xc000195d70>: { s: "timed out waiting for the condition", } Nov 27 00:15:42.701: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011d6000) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 27 00:15:42.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:15:42.741 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
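Editor's note: the failure mode above is a polling loop, not a single call. The framework retries the namespace create roughly every 2 seconds for about 30 seconds (per the log timestamps), then surfaces wait.ErrWaitTimeout, whose message is exactly the logged "timed out waiting for the condition". A minimal sketch of that pattern follows; the function name, interval, and timeout are assumptions inferred from the log, not the framework's exact code.

// retryns.go: retry namespace creation until the API server answers.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createNamespaceWithRetry polls until the create succeeds or the timeout
// elapses. Errors such as the connection-refused storm in the log above are
// logged and retried rather than returned, so the loop ends only on success
// or on wait.ErrWaitTimeout ("timed out waiting for the condition").
func createNamespaceWithRetry(client kubernetes.Interface, basename string) (*v1.Namespace, error) {
	var got *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := client.CoreV1().Namespaces().Create(context.TODO(),
			&v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"}},
			metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // keep polling
		}
		got = ns
		return true, nil
	})
	return got, err
}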
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/network/loadbalancer.go:161 k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:161 +0x9b2 from junit_01.xml
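Editor's note: the goroutine dumps in the log below are all stuck in the same place, TestJig.WaitForLoadBalancer -> waitForCondition -> wait.PollImmediate, waiting (up to 15m per the log) for the cloud provider to populate the Service's load-balancer status while the API server is unreachable. The sketch below models that wait; the function signature and poll interval are assumptions based on the stack trace, not the framework source.

// waitlb.go: poll a Service until its LoadBalancer ingress is populated.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancer retries API errors (e.g. the connection-refused storms
// in the log, echoed in the "Retrying .... error trying to get Service"
// lines) and returns once Status.LoadBalancer.Ingress is non-empty or the
// timeout elapses.
func waitForLoadBalancer(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := client.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil // transient API error: keep polling
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}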
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 23:55:49.518 Nov 26 23:55:49.518: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 23:55:49.52 Nov 26 23:55:49.559: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:51.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:53.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:55.598: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:57.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:59.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:01.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:03.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:05.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:07.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:09.598: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:11.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:13.598: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:15.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:17.599: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:57:18.8 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:57:18.879 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a TCP service [Slow] test/e2e/network/loadbalancer.go:77 Nov 26 23:57:19.110: INFO: namespace for TCP test: loadbalancers-4887 STEP: creating a TCP service mutability-test with type=ClusterIP in namespace loadbalancers-4887 11/26/22 23:57:19.162 Nov 26 
23:57:19.211: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service mutability-test 11/26/22 23:57:19.211 Nov 26 23:57:19.256: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 23:57:19.296: INFO: Found all 1 pods Nov 26 23:57:19.296: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-cjthc] Nov 26 23:57:19.296: INFO: Waiting up to 2m0s for pod "mutability-test-cjthc" in namespace "loadbalancers-4887" to be "running and ready" Nov 26 23:57:19.336: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.086297ms Nov 26 23:57:19.336: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:21.378: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081972968s Nov 26 23:57:21.378: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:23.379: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082586891s Nov 26 23:57:23.379: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:25.393: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096850683s Nov 26 23:57:25.393: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:27.413: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11707689s Nov 26 23:57:27.413: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:29.390: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093437282s Nov 26 23:57:29.390: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:31.401: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.104626906s Nov 26 23:57:31.401: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on '' to be 'Running' but was 'Pending' Nov 26 23:57:33.397: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.100716395s Nov 26 23:57:33.397: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on 'bootstrap-e2e-minion-group-rbx5' to be 'Running' but was 'Pending' Nov 26 23:57:35.394: INFO: Pod "mutability-test-cjthc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.097831733s Nov 26 23:57:35.394: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-cjthc' on 'bootstrap-e2e-minion-group-rbx5' to be 'Running' but was 'Pending' Nov 26 23:57:37.387: INFO: Pod "mutability-test-cjthc": Phase="Running", Reason="", readiness=true. Elapsed: 18.090478075s Nov 26 23:57:37.387: INFO: Pod "mutability-test-cjthc" satisfied condition "running and ready" Nov 26 23:57:37.387: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [mutability-test-cjthc] STEP: changing the TCP service to type=NodePort 11/26/22 23:57:37.387 Nov 26 23:57:37.506: INFO: TCP node port: 32737 STEP: hitting the TCP service's NodePort 11/26/22 23:57:37.506 Nov 26 23:57:37.506: INFO: Poking "http://34.83.94.215:32737/echo?msg=hello" Nov 26 23:57:37.545: INFO: Poke("http://34.83.94.215:32737/echo?msg=hello"): Get "http://34.83.94.215:32737/echo?msg=hello": dial tcp 34.83.94.215:32737: connect: connection refused Nov 26 23:57:39.546: INFO: Poking "http://34.83.94.215:32737/echo?msg=hello" Nov 26 23:57:39.630: INFO: Poke("http://34.83.94.215:32737/echo?msg=hello"): success STEP: creating a static load balancer IP 11/26/22 23:57:39.63 Nov 26 23:57:41.799: INFO: Allocated static load balancer IP: 35.227.179.144 STEP: changing the TCP service to type=LoadBalancer 11/26/22 23:57:41.799 STEP: waiting for the TCP service to have a load balancer 11/26/22 23:57:41.901 Nov 26 23:57:41.901: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 27 00:00:17.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:19.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:21.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:23.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:25.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:27.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:29.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:31.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:33.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:35.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:37.991: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:39.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:41.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:43.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:45.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:47.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:49.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:51.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:53.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:55.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:57.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:59.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:11.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:13.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:15.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:17.990: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m29.533s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 4m37.15s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:19.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:21.992: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:23.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:25.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:27.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:29.991: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:31.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:33.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:36.002: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:37.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m49.535s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 4m57.152s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:39.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:41.990: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:43.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:45.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:47.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:49.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:51.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:53.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:55.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:57.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m9.536s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m17.153s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:59.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:01.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:03.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:05.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:07.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:09.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:11.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:13.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:15.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:17.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m29.538s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m37.155s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:03:19.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:21.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:23.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:25.991: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:27.990: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m49.54s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m20.008s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 5m57.157s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m9.566s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m40.034s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 6m17.183s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) 
    test/e2e/framework/service/jig.go:582
> k8s.io/kubernetes/test/e2e/network.glob..func19.3()
    test/e2e/network/loadbalancer.go:160
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Progress Report for Ginkgo Process #9
Automatically polling progress:
  [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m29.572s)
    test/e2e/network/loadbalancer.go:77
    In [It] (Node Runtime: 7m0.039s)
      test/e2e/network/loadbalancer.go:77
      At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 6m37.188s)
        test/e2e/network/loadbalancer.go:158

  Spec Goroutine
  goroutine 212 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68)
    test/e2e/framework/service/jig.go:631
  k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?)
    test/e2e/framework/service/jig.go:582
  > k8s.io/kubernetes/test/e2e/network.glob..func19.3()
    test/e2e/network/loadbalancer.go:160
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[Elided: the same Progress Report repeats every ~20s (Spec Runtime 8m49.574s, 9m9.576s, ... 12m9.65s; Step Runtime advancing from 6m57.191s to 10m17.267s), each with an identical goroutine 212 stack.]
------------------------------
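The goroutine stack frozen in every report shows where the spec is stuck: (*TestJig).WaitForLoadBalancer drives wait.PollImmediate, whose condition function re-reads the Service until a load-balancer ingress appears or the overall timeout expires. Below is a minimal sketch of that pattern, assuming a plain client-go clientset; the helper name and the 2s interval are made up, and this is not the framework's actual waitForCondition:

package lbwait

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress polls the named Service until it reports at
// least one load-balancer ingress entry, or until the timeout expires.
func waitForLoadBalancerIngress(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Swallow transient API errors (such as "connection refused"
			// while the apiserver is down) so the poll retries instead of
			// failing fast.
			return false, nil
		}
		return len(svc.Status.LoadBalancer.Ingress) > 0, nil
	})
}

Because the condition callback returns (false, nil) on a failed GET instead of surfacing the error, an unreachable apiserver shows up as the endless "Retrying ...." lines that follow, rather than as an immediate failure.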
Nov 27 00:08:14.014: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-4887/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused
[Elided: the identical "Retrying ...." line recurs at ~2s intervals from 00:08:15.991 through 00:09:59.990, interleaved with Progress Reports for Ginkgo Process #9 every ~20s (Spec Runtime 12m29.652s through 13m50.733s), each still At [By Step] waiting for the TCP service to have a load balancer (test/e2e/network/loadbalancer.go:158) with the same goroutine 212 stack as above.]
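For orientation, the step that is stuck comes right after the mutation named in the spec title: the test flips an existing Service to type LoadBalancer and then waits for the cloud provider to publish an ingress. A hypothetical sketch of that mutation, using only standard client-go calls (the real logic lives in test/e2e/network/loadbalancer.go and is not reproduced here):

package lbwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// switchToLoadBalancer flips an existing Service to type LoadBalancer; the
// cloud provider is then expected to populate status.loadBalancer.ingress,
// which is what the stuck "waiting for the TCP service to have a load
// balancer" step is polling for.
func switchToLoadBalancer(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	svc.Spec.Type = corev1.ServiceTypeLoadBalancer
	return c.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
}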
[Elided: the retries continue at ~2s intervals from 00:10:01.991 through 00:11:29.991, with further Progress Reports at Spec Runtime 14m10.735s, 14m30.737s, 14m50.739s, 15m10.741s, 15m30.743s, and 15m50.746s; every report shows the same goroutine 212 stack, and the captured log cuts off inside the final report.]
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 16m10.747s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 14m41.215s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 14m18.364s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 16m30.749s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 15m1.216s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 14m38.366s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #9 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 16m50.752s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 15m21.219s) test/e2e/network/loadbalancer.go:77 At [By Step] waiting for the TCP service to have a load balancer (Step Runtime: 14m58.368s) test/e2e/network/loadbalancer.go:158 Spec Goroutine goroutine 212 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001bcc270, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0xc8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001ba4c00?, 0xc000917b18?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0017156a0?, 0x7fa7740?, 0xc00029ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00420b8b0, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00420b8b0, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:160 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc00092f800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:12:42.034: INFO: Unexpected error: <*fmt.wrapError | 0xc0027a67e0>: { msg: "timed out waiting for service \"mutability-test\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc000295d70>{ s: "timed out waiting for the condition", }, } Nov 27 00:12:42.034: FAIL: timed out waiting for service "mutability-test" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:161 +0x9b2 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 27 00:12:43.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 27 00:12:43.868: INFO: Output of kubectl describe svc: Nov 27 00:12:43.868: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-4887 describe svc --namespace=loadbalancers-4887' Nov 27 00:12:44.188: INFO: stderr: "" Nov 27 00:12:44.188: INFO: stdout: "Name: mutability-test\nNamespace: loadbalancers-4887\nLabels: testid=mutability-test-4a15f2ba-a88f-4cf9-96e3-47cb4a120f67\nAnnotations: <none>\nSelector: testid=mutability-test-4a15f2ba-a88f-4cf9-96e3-47cb4a120f67\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.140.27\nIPs: 10.0.140.27\nIP: 35.227.179.144\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 32737/TCP\nEndpoints: 10.64.2.19:80\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Type 15m service-controller NodePort -> LoadBalancer\n" Nov 27 00:12:44.188: INFO: Name: mutability-test Namespace: loadbalancers-4887 Labels: testid=mutability-test-4a15f2ba-a88f-4cf9-96e3-47cb4a120f67 Annotations: <none> Selector: testid=mutability-test-4a15f2ba-a88f-4cf9-96e3-47cb4a120f67 Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.140.27 IPs: 10.0.140.27 IP: 35.227.179.144 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32737/TCP Endpoints: 10.64.2.19:80 Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Type 15m service-controller NodePort -> LoadBalancer [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/27/22 00:12:44.189 STEP: Collecting events from namespace "loadbalancers-4887". 11/27/22 00:12:44.189 STEP: Found 9 events. 
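[Editor's note] The goroutine dumps above are all parked at the same PollImmediate call inside the service test jig: the spec loops on a GET of the Service, waiting for a load-balancer ingress to show up in status, and every iteration during the outage hits "connection refused" against 34.83.110.108:443 until the 15-minute budget is gone. Notably, the describe output above does show an external IP (35.227.179.144), so the LB appears to have been provisioned; the wait most likely timed out because the apiserver was unreachable for much of the window. A minimal sketch of that polling pattern follows, assuming a client-go clientset; the helper name waitForLoadBalancerIngress and the 2s interval are illustrative, not the framework's actual code in test/e2e/framework/service/jig.go.

package lbwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress polls the Service until
// status.loadBalancer.ingress is populated or the timeout expires.
func waitForLoadBalancerIngress(c kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient apiserver errors ("connection refused" in the log)
			// are swallowed and retried, which is why the log shows
			// "Retrying ...." instead of an immediate failure.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // no LB ingress visible yet; keep polling
		}
		svc = s
		return true, nil
	})
	if err != nil {
		return nil, fmt.Errorf("timed out waiting for service %q to have a load balancer: %w", name, err)
	}
	return svc, nil
}

The key design point visible in the trace: the condition func returns (false, nil) on errors rather than (false, err), so apiserver downtime consumes the timeout instead of aborting the wait.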
[DeferCleanup (Each)] [sig-network] LoadBalancers
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:12:44.189
STEP: Collecting events from namespace "loadbalancers-4887". 11/27/22 00:12:44.189
STEP: Found 9 events. 11/27/22 00:12:44.239
Nov 27 00:12:44.239: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for mutability-test-cjthc: { } Scheduled: Successfully assigned loadbalancers-4887/mutability-test-cjthc to bootstrap-e2e-minion-group-rbx5
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:57:19 +0000 UTC - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-cjthc
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:57:33 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:57:33 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} Created: Created container netexec
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:57:33 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} Started: Started container netexec
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:57:41 +0000 UTC - event for mutability-test: {service-controller } Type: NodePort -> LoadBalancer
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:58:58 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} Killing: Stopping container netexec
Nov 27 00:12:44.239: INFO: At 2022-11-26 23:58:58 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 27 00:12:44.239: INFO: At 2022-11-27 00:00:11 +0000 UTC - event for mutability-test-cjthc: {kubelet bootstrap-e2e-minion-group-rbx5} BackOff: Back-off restarting failed container netexec in pod mutability-test-cjthc_loadbalancers-4887(657be205-8e09-4dba-b9ce-5a4d7d47748d)
Nov 27 00:12:44.286: INFO: POD                    NODE                             PHASE    GRACE  CONDITIONS
Nov 27 00:12:44.286: INFO: mutability-test-cjthc  bootstrap-e2e-minion-group-rbx5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:10:28 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-27 00:10:28 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 23:57:31 +0000 UTC }]
Nov 27 00:12:44.287: INFO:
Nov 27 00:12:44.330: INFO: Unable to fetch loadbalancers-4887/mutability-test-cjthc/netexec logs: an error on the server ("unknown") has prevented the request from succeeding (get pods mutability-test-cjthc)
Nov 27 00:12:44.376: INFO: Logging node info for node bootstrap-e2e-master
Nov 27 00:12:44.417: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 05f7f9a2-a79e-4352-8a79-75844a59633a 4121 0 2022-11-26 23:53:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:50 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 23:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:54:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.110.108,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:21ea7dd3-945c-4bf1-ab0c-68e321320196,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:44.417: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 27 00:12:44.460: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 27 00:12:44.502: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 27 00:12:44.502: INFO: Logging node info for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:44.543: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-1tnv 846ba36d-94f6-4e94-b203-fd107e853327 4259 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-1tnv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-1tnv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2574":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-4384":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-715":"bootstrap-e2e-minion-group-1tnv","csi-hostpath-provisioning-9757":"bootstrap-e2e-minion-group-1tnv","csi-mock-csi-mock-volumes-8953":"csi-mock-csi-mock-volumes-8953"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:58:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-27 00:12:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-1tnv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 
00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:36 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.94.215,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-1tnv.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b50e399c30b5a1961e4b37e487500979,SystemUUID:b50e399c-30b5-a196-1e4b-37e487500979,BootID:049b80da-b98b-4e50-9ce2-87280edcdc78,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122 kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347 kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2027^1fb4840f-6de6-11ed-b41a-96c9bb8b92a9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-2739^2a7e0a80-6de6-11ed-83a6-224e25ee64d6,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2870^fe43d939-6de5-11ed-aaa2-5eafa0978347,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-715^f80dfac9-6de5-11ed-8ee2-5697939b789b,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2574^f8338ec6-6de5-11ed-85b7-e62276dae122,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1004^f7751827-6de5-11ed-a986-3af9dba8f2ba,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9757^1e551df0-6de6-11ed-acef-92a46bd148c0,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4384^1da125f1-6de6-11ed-9f92-ce37b6f7d123,DevicePath:,},},Config:nil,},} Nov 27 00:12:44.544: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:44.586: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-1tnv Nov 27 00:12:44.699: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-1tnv: error trying to reach service: No agent available Nov 27 00:12:44.699: INFO: Logging node info for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:44.742: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2qlj 2a4622eb-989e-4a24-9c67-05b1d3225d2a 4181 0 2022-11-26 23:53:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2qlj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-27 00:11:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-2qlj,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: 
{{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:39 +0000 UTC,LastTransitionTime:2022-11-26 23:53:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:38 +0000 UTC,LastTransitionTime:2022-11-26 23:53:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.127.85.27,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2qlj.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ff92829536287513f2f9166c8337ac18,SystemUUID:ff928295-3628-7513-f2f9-166c8337ac18,BootID:f8a0168e-4102-47f3-bee3-ed410278cdb0,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:44.742: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:44.789: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2qlj Nov 27 00:12:44.836: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-2qlj: error trying to reach service: No agent available Nov 27 00:12:44.836: INFO: Logging node info for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:44.880: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rbx5 71a3da10-2d41-41fe-9331-fb855b0bb42f 4174 0 2022-11-26 23:53:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rbx5 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rbx5 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 23:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 23:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-27 00:11:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} 
{node-problem-detector Update v1 2022-11-27 00:11:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-rbx5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-27 00:11:37 +0000 UTC,LastTransitionTime:2022-11-26 23:53:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 23:54:05 +0000 UTC,LastTransitionTime:2022-11-26 23:54:05 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-27 00:11:33 +0000 UTC,LastTransitionTime:2022-11-26 23:53:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.203.146.23,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rbx5.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3e37064ccccf109daf185b691fbc1e81,SystemUUID:3e37064c-cccf-109d-af18-5b691fbc1e81,BootID:aa4950bb-b649-4cf2-8496-bbb949f31f9b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 00:12:44.880: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:44.928: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rbx5 Nov 27 00:12:44.977: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-rbx5: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-4887" for this suite. 11/27/22 00:12:44.977
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/network/loadbalancer.go:382 k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:382 +0x978
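The stack traces repeated throughout the log below all show the spec parked in (*TestJig).WaitForLoadBalancer, which polls the Service object until the cloud provider reports an ingress endpoint. A minimal sketch of that polling pattern, assuming client-go and apimachinery's wait package (the function and variable names are illustrative, not the framework's own):

package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancer polls a Service every 2s (matching the ~2s cadence of
// the "Retrying ...." lines below) until status.loadBalancer.ingress is
// populated or the timeout expires. Illustrative sketch, not the framework code.
func waitForLoadBalancer(c kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient apiserver errors (e.g. connection refused) are
			// treated as "not ready yet" and retried, not returned.
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil // cloud LB not provisioned yet
		}
		svc = s
		return true, nil
	})
	return svc, err
}

Because transient Get errors return false rather than an error, an apiserver outage like the "connection refused" streak in this log simply extends the poll until the 15-minute budget is exhausted, which is why the test sits in this step for the remainder of its runtime.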
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 23:55:49.77 Nov 26 23:55:49.770: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 23:55:49.772 Nov 26 23:55:49.811: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:51.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:53.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:55.852: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:57.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:55:59.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:01.850: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:03.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:05.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:07.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:09.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:11.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:13.850: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:15.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused Nov 26 23:56:17.851: INFO: Unexpected error while creating namespace: Post "https://34.83.110.108/api/v1/namespaces": dial tcp 34.83.110.108:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 23:57:18.609 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 23:57:18.699 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a UDP service [Slow] test/e2e/network/loadbalancer.go:287 Nov 26 23:57:18.911: INFO: namespace for TCP test: loadbalancers-6808 STEP: creating a UDP service mutability-test with type=ClusterIP in namespace loadbalancers-6808 11/26/22 23:57:18.955 Nov 26 
23:57:19.017: INFO: service port UDP: 80 STEP: creating a pod to be part of the UDP service mutability-test 11/26/22 23:57:19.017 Nov 26 23:57:19.067: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 23:57:19.120: INFO: Found 0/1 pods - will retry Nov 26 23:57:21.171: INFO: Found all 1 pods Nov 26 23:57:21.171: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-mz5vr] Nov 26 23:57:21.171: INFO: Waiting up to 2m0s for pod "mutability-test-mz5vr" in namespace "loadbalancers-6808" to be "running and ready" Nov 26 23:57:21.228: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 56.641998ms Nov 26 23:57:21.228: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:23.275: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103333633s Nov 26 23:57:23.275: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:25.290: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118450325s Nov 26 23:57:25.290: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:27.308: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136721818s Nov 26 23:57:27.308: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:29.280: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10847268s Nov 26 23:57:29.280: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:31.295: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.124005376s Nov 26 23:57:31.296: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on '' to be 'Running' but was 'Pending' Nov 26 23:57:33.312: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.140529404s Nov 26 23:57:33.312: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on 'bootstrap-e2e-minion-group-rbx5' to be 'Running' but was 'Pending' Nov 26 23:57:35.289: INFO: Pod "mutability-test-mz5vr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117648445s Nov 26 23:57:35.289: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-mz5vr' on 'bootstrap-e2e-minion-group-rbx5' to be 'Running' but was 'Pending' Nov 26 23:57:37.295: INFO: Pod "mutability-test-mz5vr": Phase="Running", Reason="", readiness=true. Elapsed: 16.1230664s Nov 26 23:57:37.295: INFO: Pod "mutability-test-mz5vr" satisfied condition "running and ready" Nov 26 23:57:37.295: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [mutability-test-mz5vr] STEP: changing the UDP service to type=NodePort 11/26/22 23:57:37.295 Nov 26 23:57:37.393: INFO: UDP node port: 32190 STEP: hitting the UDP service's NodePort 11/26/22 23:57:37.393 Nov 26 23:57:37.393: INFO: Poking udp://34.83.94.215:32190 Nov 26 23:57:37.433: INFO: Poke("udp://34.83.94.215:32190"): read udp 10.60.99.169:49288->34.83.94.215:32190: read: connection refused Nov 26 23:57:39.434: INFO: Poking udp://34.83.94.215:32190 Nov 26 23:57:39.475: INFO: Poke("udp://34.83.94.215:32190"): success STEP: creating a static load balancer IP 11/26/22 23:57:39.475 Nov 26 23:57:41.506: INFO: Allocated static load balancer IP: 35.185.230.152 STEP: changing the UDP service to type=LoadBalancer 11/26/22 23:57:41.506 STEP: demoting the static IP to ephemeral 11/26/22 23:57:41.602 STEP: waiting for the UDP service to have a load balancer 11/26/22 23:57:43.37 Nov 26 23:57:43.370: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 27 00:00:17.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:19.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:21.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:23.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:25.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:27.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:29.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:31.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:33.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:35.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:37.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:39.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:41.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:43.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:45.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:47.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:49.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:51.454: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:53.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:55.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:57.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:00:59.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:11.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:13.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:15.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:17.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m29.096s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 4m35.496s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:19.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:21.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:23.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:25.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:27.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:29.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:31.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:33.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:35.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:37.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 6m49.098s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m20.002s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 4m55.498s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:39.449: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:41.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:43.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:45.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:47.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:49.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:51.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:53.449: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:55.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:02:57.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m9.1s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 5m40.004s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m15.5s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:02:59.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:01.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:03.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:05.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:07.451: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:09.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:11.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:13.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:15.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:17.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m29.102s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m35.502s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 27 00:03:19.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:21.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:23.450: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:25.449: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused Nov 27 00:03:27.450: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 7m49.11s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m20.015s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 5m55.511s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m9.114s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 6m40.019s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m15.514s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m29.116s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m0.02s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m35.516s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 8m49.118s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m20.022s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 6m55.518s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 9m9.12s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 7m40.024s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 7m15.52s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?)
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 11m49.14s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 10m20.045s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 9m55.54s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fa2b40?, 0xc001c9bbb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0015308a0?, 0x7fa7740?, 0xc00024a5c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc002dccd70, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc002dccd70, 0x33?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.4() test/e2e/network/loadbalancer.go:381 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0033f6180}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #2 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow] (Spec Runtime: 12m9.142s) test/e2e/network/loadbalancer.go:287 In [It] (Node Runtime: 10m40.047s) test/e2e/network/loadbalancer.go:287 At [By Step] waiting for the UDP service to have a load balancer (Step Runtime: 10m15.542s) test/e2e/network/loadbalancer.go:379 Spec Goroutine goroutine 527 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0005ac1c8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x68?, 0x2fd9d05?, 0x20?) 
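Every one of these stacks bottoms out in the same apimachinery polling helper. As a reference for reading them, here is a minimal sketch of that pattern using the legacy wait.PollImmediate API visible in the frames; the probe function below is a made-up stand-in, the real condition lives in the service test jig:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// PollImmediate runs the condition once immediately, then every
	// interval, until it returns true, returns an error, or the timeout
	// elapses. On timeout it returns wait.ErrWaitTimeout, whose message
	// is exactly the "timed out waiting for the condition" seen in this log.
	err := wait.PollImmediate(2*time.Second, 15*time.Minute, func() (done bool, err error) {
		return probeLoadBalancer(), nil // hypothetical condition; a non-nil error would abort the poll
	})
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}

// probeLoadBalancer is a placeholder for the jig's real readiness check.
func probeLoadBalancer() bool { return false }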
Nov 27 00:08:15.449: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused
Nov 27 00:08:17.451: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused
[the same "Retrying ...." line, always ending in connection refused, repeats every ~2s through Nov 27 00:10:03.450; the identical progress report for goroutine 527 recurs every ~20s in between (Spec Runtime 12m29s, 12m50s, 13m10s, 13m30s, 13m50s, 14m10s)]
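The retry loop is just a Get on the Service re-issued after every failed dial. A rough client-go equivalent of what the framework is doing here; the function and variable names are my own, the real helper lives in test/e2e/framework/service:

package e2eutil

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getServiceWithRetry re-reads a Service until the apiserver answers,
// logging between attempts just like the "Retrying ...." lines above.
func getServiceWithRetry(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	for {
		svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			return svc, nil
		}
		// While the apiserver is down, err is the
		// "dial tcp ...:443: connect: connection refused" error.
		log.Printf("Retrying .... error trying to get Service %s: %v", name, err)
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}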
Nov 27 00:10:05.451: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused
[the connection-refused retries continue every ~2s through Nov 27 00:11:19.451, and the unchanged progress report recurs every ~20s (Spec Runtime 14m30s, 14m50s, 15m10s, 15m30s); the step "waiting for the UDP service to have a load balancer" is still pending]
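These recurring blocks are Ginkgo v2's automatic progress polling, which snapshots the spec goroutine's stack on a timer. If I remember the Ginkgo v2 decorator API correctly, a spec can tune this per-node roughly like so; the spec body is illustrative:

package network_test

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
)

// A progress report like the ones above is emitted once the node has run
// longer than the PollProgressAfter threshold, then repeatedly at
// PollProgressInterval. (These decorators exist in recent Ginkgo v2
// releases; treat the exact names as an assumption and check the
// vendored version's docs.)
var _ = It("waits for a load balancer [Slow]",
	PollProgressAfter(10*time.Second),
	PollProgressInterval(20*time.Second),
	func() {
		// long-running polling body here
	})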
Nov 27 00:11:21.451: INFO: Retrying .... error trying to get Service mutability-test: Get "https://34.83.110.108/api/v1/namespaces/loadbalancers-6808/services/mutability-test": dial tcp 34.83.110.108:443: connect: connection refused
[the last connection-refused retry is logged at Nov 27 00:11:29.450; the retries then stop, but the identical progress report keeps recurring every ~20s (Spec Runtime 15m50s, 16m10s, 16m30s, 16m50s) with the spec still blocked in WaitForLoadBalancer]
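For reference, what WaitForLoadBalancer is blocked on is the cloud provider publishing an ingress endpoint in the Service's status. A sketch of that condition, and of how the timeout gets wrapped into the error reported below; the function is my own reconstruction rather than the jig's actual code, while Status.LoadBalancer.Ingress and fmt.Errorf's %w wrapping are standard:

package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancer polls the Service until the cloud provider publishes
// an ingress IP or hostname in Status.LoadBalancer.Ingress.
func waitForLoadBalancer(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient apiserver errors
		}
		svc = s
		return len(s.Status.LoadBalancer.Ingress) > 0, nil
	})
	if err != nil {
		// Produces a *fmt.wrapError like the one in the failure below,
		// with "timed out waiting for the condition" as the inner error.
		return nil, fmt.Errorf("timed out waiting for service %q to have a load balancer: %w", name, err)
	}
	return svc, nil
}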
Nov 27 00:12:43.493: INFO: Unexpected error:
<*fmt.wrapError | 0xc00356ea40>: {
    msg: "timed out waiting for service \"mutability-test\" to have a load balancer: timed out waiting for the condition",
    err: <*errors.errorString | 0xc000249980>{
        s: "timed out waiting for the condition",
    },
}
Nov 27 00:12:43.493: FAIL: timed out waiting for service "mutability-test" to have a load balancer: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.4()
	test/e2e/network/loadbalancer.go:382 +0x978
[AfterEach] [sig-network] LoadBalancers
  test/e2e/framework/node/init/init.go:32
Nov 27 00:12:43.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers
  test/e2e/network/loadbalancer.go:71
Nov 27 00:12:43.576: INFO: Output of kubectl describe svc:
Nov 27 00:12:43.576: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.110.108 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6808 describe svc --namespace=loadbalancers-6808'
Nov 27 00:12:43.891: INFO: stderr: ""
Nov 27 00:12:43.891: INFO: stdout:
Name:                     mutability-test
Namespace:                loadbalancers-6808
Labels:                   testid=mutability-test-dcbe3483-4d85-43e8-8183-0e940a9d56e4
Annotations:              <none>
Selector:                 testid=mutability-test-dcbe3483-4d85-43e8-8183-0e940a9d56e4
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.111.96
IPs:                      10.0.111.96
Port:                     <unset>  80/UDP
TargetPort:               80/UDP
NodePort:                 <unset>  32190/UDP
Endpoints:                10.64.2.21:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---- ----                -------
  Normal  Type    15m  service-controller  NodePort -> LoadBalancer
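The describe output confirms the symptom: the Service is Type: LoadBalancer with a NodePort assigned, but no LoadBalancer Ingress address ever appears, which is exactly the condition the poll above was waiting on. The <*fmt.wrapError> in the failure dump also suggests the jig wrapped the poll timeout with fmt.Errorf and the %w verb; the inner string, "timed out waiting for the condition", is the message of wait.ErrWaitTimeout from the same vendored wait package. A short sketch of that wrapping pattern follows (the exact call site in jig.go is assumed, not shown in this log; the service name is taken from this run):

package main

import (
	"errors"
	"fmt"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Wrapping the poll timeout with %w yields a *fmt.wrapError, matching the
	// dump above. This is a sketch of the pattern, not the framework's code.
	err := fmt.Errorf("timed out waiting for service %q to have a load balancer: %w",
		"mutability-test", wait.ErrWaitTimeout)

	fmt.Println(err)
	// The wrapped timeout remains detectable by callers:
	fmt.Println(errors.Is(err, wait.ErrWaitTimeout)) // true
}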
[DeferCleanup (Each)] [sig-network] LoadBalancers
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/27/22 00:12:43.891
STEP: Collecting events from namespace "loadbalancers-6808". 11/27/22 00:12:43.891
STEP: Found 11 events. 11/27/22 00:12:43.933
Nov 27 00:12:43.933: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for mutability-test-mz5vr: { } Scheduled: Successfully assigned loadbalancers-6808/mutability-test-mz5vr to bootstrap-e2e-minion-group-rbx5
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:19 +0000 UTC - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-mz5vr
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:33 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-nmlkr" : failed to sync configmap cache: timed out waiting for the condition
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:35 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:35 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} Created: Created container netexec
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:35 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} Started: Started container netexec
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:57:41 +0000 UTC - event for mutability-test: {service-controller } Type: NodePort -> LoadBalancer
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:58:42 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} Killing: Stopping container netexec
Nov 27 00:12:43.933: INFO: At 2022-11-26 23:58:42 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 27 00:12:43.933: INFO: At 2022-11-27 00:00:06 +0000 UTC - event for mutability-test-mz5vr: {kubelet bootstrap-e2e-minion-group-rbx5} BackOff: Back-off restarting failed container netexec in pod mutability-test-mz5vr_loadbalancers-6808(93b595be-b60d-4b09-931d-f6436a5ae33a)
Nov 27 00:12:43.933: INFO: At 2022-11-27 00:00:25 +0000 UTC -
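The event dump above is what a plain client-go listing of the namespace would return, which is useful when reproducing this kind of failure outside the e2e framework. A sketch of collecting the same events directly, assuming the kubeconfig path this run used (/workspace/.kube/config) and the test namespace loadbalancers-6808:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and namespace are the ones this run used; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	evs, err := cs.CoreV1().Events("loadbalancers-6808").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Print one event per line, roughly matching the framework's dump format.
	for _, e := range evs.Items {
		fmt.Printf("%s  %-16s %s/%s: %s\n",
			e.LastTimestamp.Format("2006-01-02 15:04:05"), e.Reason,
			e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
}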