go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc

There were additional failures detected after the initial failure:
[FAILED] Nov 25 18:59:58.825: failed to list events in namespace "chunking-2732": Get "https://104.198.13.163/api/v1/namespaces/chunking-2732/events": dial tcp 104.198.13.163:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 18:59:58.866: Couldn't delete ns: "chunking-2732": Delete "https://104.198.13.163/api/v1/namespaces/chunking-2732": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/chunking-2732", Err:(*net.OpError)(0xc0022cc000)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

(from junit_01.xml)
[BeforeEach] [sig-api-machinery] Servers with support for API chunking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:58:20.678 Nov 25 18:58:20.679: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename chunking 11/25/22 18:58:20.68 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:58:20.872 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:58:20.953 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/apimachinery/chunking.go:51 STEP: creating a large number of resources 11/25/22 18:58:21.07 [It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] test/e2e/apimachinery/chunking.go:126 STEP: retrieving the first page 11/25/22 18:58:38.614 Nov 25 18:58:38.664: INFO: Retrieved 40/40 results with rv 1817 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTgxNywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 STEP: retrieving the second page until the token expires 11/25/22 18:58:38.665 Nov 25 18:58:58.714: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTgxNywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 25 18:59:18.713: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTgxNywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 25 18:59:38.712: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTgxNywic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet STEP: retrieving the second page again with the token received with the error message 11/25/22 18:59:58.705 Nov 25 18:59:58.745: INFO: Unexpected error: failed to list pod templates in namespace: chunking-2732, given inconsistent continue token and limit: 40: <*url.Error | 0xc0020789c0>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/chunking-2732/podtemplates?limit=40", Err: <*net.OpError | 0xc003253d10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f587e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0016a5b20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:59:58.745: FAIL: failed to list pod templates in namespace: chunking-2732, given inconsistent continue token and limit: 40: Get "https://104.198.13.163/api/v1/namespaces/chunking-2732/podtemplates?limit=40": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:177 +0x7fc [AfterEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/node/init/init.go:32 Nov 25 18:59:58.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:59:58.786 STEP: Collecting events from namespace "chunking-2732". 
11/25/22 18:59:58.786 Nov 25 18:59:58.825: INFO: Unexpected error: failed to list events in namespace "chunking-2732": <*url.Error | 0xc001eb54a0>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/chunking-2732/events", Err: <*net.OpError | 0xc0031656d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f58db0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00108a480>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:59:58.825: FAIL: failed to list events in namespace "chunking-2732": Get "https://104.198.13.163/api/v1/namespaces/chunking-2732/events": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc002e385c0, {0xc0034052f0, 0xd}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0030d8680}, {0xc0034052f0, 0xd}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc002e38650?, {0xc0034052f0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00042b0e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000b5bec0?, 0xc001136fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0031f6088?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000b5bec0?, 0x29449fc?}, {0xae73300?, 0xc001136f80?, 0x2fdb5c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking tear down framework | framework.go:193 STEP: Destroying namespace "chunking-2732" for this suite. 11/25/22 18:59:58.826 Nov 25 18:59:58.866: FAIL: Couldn't delete ns: "chunking-2732": Delete "https://104.198.13.163/api/v1/namespaces/chunking-2732": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/chunking-2732", Err:(*net.OpError)(0xc0022cc000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00042b0e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000b5be00?, 0x687420656c696877?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x646f702077656e20?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000b5be00?, 0x657275746165465b?}, {0xae73300?, 0x616552746e756f4d?, 0x6e4f657469725764?}) /usr/local/go/src/reflect/value.go:368 +0xbc
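Note: the step that failed at chunking.go:177 is the "inconsistent continue" path of API chunking: page through a large list with a small limit, wait until the continue token's resourceVersion has been compacted away, then expect the 410 response to carry a fresh token that resumes from the last key. A minimal client-go sketch of that flow is below; the helper name, kubeconfig wiring and namespace are illustrative, not the actual test code.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listAllTemplates pages through PodTemplates 40 at a time. If the continue
// token expires because its resourceVersion was compacted (410 Gone), it
// resumes from the token returned in the error's ListMeta, accepting an
// inconsistent (not point-in-time) continuation -- the behaviour the e2e
// test above asserts.
func listAllTemplates(ctx context.Context, cs kubernetes.Interface, ns string) error {
	opts := metav1.ListOptions{Limit: 40}
	for {
		list, err := cs.CoreV1().PodTemplates(ns).List(ctx, opts)
		if apierrors.IsResourceExpired(err) {
			if s, ok := err.(apierrors.APIStatus); ok && s.Status().ListMeta.Continue != "" {
				opts.Continue = s.Status().ListMeta.Continue
				continue
			}
			return err
		}
		if err != nil {
			return err
		}
		fmt.Printf("got %d items, continue=%q\n", len(list.Items), list.Continue)
		if list.Continue == "" {
			return nil
		}
		opts.Continue = list.Continue
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := listAllTemplates(context.Background(), kubernetes.NewForConfigOrDie(cfg), "default"); err != nil {
		panic(err)
	}
}
```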
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0024e6820}, 0xc00367ea00)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/apps.confirmStatefulPodCount({0x801de88, 0xc0024e6820}, 0x0, 0xc00367ea00, 0x0?, 0x0)
	test/e2e/apps/statefulset.go:1672 +0xcd
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
	test/e2e/apps/statefulset.go:726 +0x491

There were additional failures detected after the initial failure:
[FAILED] Nov 25 19:10:52.662: Get "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7863/statefulsets": dial tcp 104.198.13.163:443: connect: connection refused
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76
----------
[FAILED] Nov 25 19:10:52.741: failed to list events in namespace "statefulset-7863": Get "https://104.198.13.163/api/v1/namespaces/statefulset-7863/events": dial tcp 104.198.13.163:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 19:10:52.782: Couldn't delete ns: "statefulset-7863": Delete "https://104.198.13.163/api/v1/namespaces/statefulset-7863": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/statefulset-7863", Err:(*net.OpError)(0xc0047f5680)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

(from junit_01.xml)
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 19:09:14.124 Nov 25 19:09:14.124: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 19:09:14.126 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:09:14.328 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:09:14.411 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-7863 11/25/22 19:09:14.512 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:697 STEP: Creating stateful set ss in namespace statefulset-7863 11/25/22 19:09:14.583 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7863 11/25/22 19:09:14.649 Nov 25 19:09:14.724: INFO: Found 0 stateful pods, waiting for 1 Nov 25 19:09:24.823: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 11/25/22 19:09:24.823 Nov 25 19:09:24.876: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:25.357: INFO: rc: 1 Nov 25 19:09:25.357: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:09:35.358: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:35.873: INFO: rc: 1 Nov 25 19:09:35.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:09:45.873: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:46.338: INFO: rc: 1 Nov 25 19:09:46.338: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config 
--namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:09:56.339: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:56.815: INFO: rc: 1 Nov 25 19:09:56.815: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:10:06.816: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:10:07.637: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 19:10:07.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 19:10:07.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 19:10:07.728: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 25 19:10:17.860: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 19:10:17.860: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 19:10:18.127: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:10:18.127: INFO: ss-0 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:09:14 +0000 UTC }] Nov 25 19:10:18.127: INFO: ss-1 Pending [] Nov 25 19:10:18.127: INFO: Nov 25 19:10:18.127: INFO: StatefulSet ss has not reached scale 3, at 2 Nov 25 19:10:19.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.930266042s Nov 25 19:10:20.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.882908689s Nov 25 19:10:21.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.824083216s Nov 25 19:10:22.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.759069012s Nov 25 19:10:23.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.708753602s Nov 25 19:10:24.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.620914314s Nov 25 19:10:25.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.573219118s Nov 25 19:10:26.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.506802624s Nov 25 19:10:27.719: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 425.588627ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7863 11/25/22 19:10:28.72 Nov 25 19:10:28.774: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 19:10:29.586: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 25 19:10:29.586: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 19:10:29.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 19:10:29.587: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 19:10:30.621: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 25 19:10:30.622: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 19:10:30.622: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 19:10:30.622: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 19:10:31.281: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Nov 25 19:10:31.281: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 19:10:31.281: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 19:10:31.334: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 25 19:10:31.334: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 25 19:10:31.334: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod 11/25/22 19:10:31.334 Nov 25 19:10:31.395: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:10:32.192: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 19:10:32.192: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 19:10:32.192: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 19:10:32.192: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config 
--namespace=statefulset-7863 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:10:32.878: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 19:10:32.879: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 19:10:32.879: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 19:10:32.879: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7863 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:10:33.652: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 19:10:33.652: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 19:10:33.652: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 19:10:33.652: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 19:10:33.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 25 19:10:43.897: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 19:10:43.897: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 25 19:10:43.897: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 25 19:10:44.079: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:10:44.079: INFO: ss-0 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:09:14 +0000 UTC }] Nov 25 19:10:44.079: INFO: ss-1 bootstrap-e2e-minion-group-p8wv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:18 +0000 UTC }] Nov 25 19:10:44.079: INFO: ss-2 bootstrap-e2e-minion-group-rvwg Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:18 +0000 UTC }] Nov 25 19:10:44.079: INFO: Nov 25 19:10:44.079: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 25 19:10:45.136: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:10:45.136: INFO: ss-2 bootstrap-e2e-minion-group-rvwg Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 
19:10:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:10:18 +0000 UTC }] Nov 25 19:10:45.136: INFO: Nov 25 19:10:45.136: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 25 19:10:46.204: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.872630562s Nov 25 19:10:47.280: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.804730019s Nov 25 19:10:48.338: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.728614302s Nov 25 19:10:49.393: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.669911586s Nov 25 19:10:50.456: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.615371277s Nov 25 19:10:51.542: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.55302617s Nov 25 19:10:52.583: INFO: Unexpected error: <*url.Error | 0xc00499fa70>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/statefulset-7863/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc0034a19a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003f4ac60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001964080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:52.583: FAIL: Get "https://104.198.13.163/api/v1/namespaces/statefulset-7863/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc0024e6820}, 0xc00367ea00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/apps.confirmStatefulPodCount({0x801de88, 0xc0024e6820}, 0x0, 0xc00367ea00, 0x0?, 0x0) test/e2e/apps/statefulset.go:1672 +0xcd k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:726 +0x491 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 25 19:10:52.622: INFO: Deleting all statefulset in ns statefulset-7863 Nov 25 19:10:52.662: INFO: Unexpected error: <*url.Error | 0xc00499ff20>: { Op: "Get", URL: "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7863/statefulsets", Err: <*net.OpError | 0xc0034a1bd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004b50c90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0019643e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:52.662: FAIL: Get "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7863/statefulsets": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc0024e6820}, {0xc001121ee0, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 19:10:52.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | 
framework.go:196 STEP: dump namespace information after failure 11/25/22 19:10:52.702 STEP: Collecting events from namespace "statefulset-7863". 11/25/22 19:10:52.702 Nov 25 19:10:52.741: INFO: Unexpected error: failed to list events in namespace "statefulset-7863": <*url.Error | 0xc004c48c30>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/statefulset-7863/events", Err: <*net.OpError | 0xc0034a1e50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003f4b350>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001964740>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:52.741: FAIL: failed to list events in namespace "statefulset-7863": Get "https://104.198.13.163/api/v1/namespaces/statefulset-7863/events": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001ece5c0, {0xc001121ee0, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0024e6820}, {0xc001121ee0, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001ece650?, {0xc001121ee0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000f061e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0011a0a60?, 0xc00188efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00425a3c8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0011a0a60?, 0x29449fc?}, {0xae73300?, 0xc00188ef80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-7863" for this suite. 11/25/22 19:10:52.742 Nov 25 19:10:52.782: FAIL: Couldn't delete ns: "statefulset-7863": Delete "https://104.198.13.163/api/v1/namespaces/statefulset-7863": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/statefulset-7863", Err:(*net.OpError)(0xc0047f5680)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f061e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0011a0960?, 0xc004c50fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0011a0960?, 0x0?}, {0xae73300?, 0x5?, 0xc0006ffe60?}) /usr/local/go/src/reflect/value.go:368 +0xbc
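Note: the call that hit "connection refused" here is the framework's GetPodList (test/e2e/framework/statefulset/rest.go:69), i.e. a plain pod LIST filtered by the StatefulSet's spec.selector, which is where the labelSelector=baz%3Dblah%2Cfoo%3Dbar query above comes from. A rough self-contained equivalent (a sketch only, assuming a client-go clientset, not the framework's actual code) looks like:

```go
package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsForStatefulSet lists the pods selected by a StatefulSet's spec.selector,
// which produces requests like
// GET /api/v1/namespaces/statefulset-7863/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar.
func podsForStatefulSet(ctx context.Context, cs kubernetes.Interface, ss *appsv1.StatefulSet) (*v1.PodList, error) {
	sel, err := metav1.LabelSelectorAsSelector(ss.Spec.Selector)
	if err != nil {
		return nil, err
	}
	return cs.CoreV1().Pods(ss.Namespace).List(ctx, metav1.ListOptions{LabelSelector: sel.String()})
}
```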
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc002fdc000}, 0xc003182a00)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00349a048, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0039f5de0?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc002fdc000?, 0xc0039f5e20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc002fdc000}, 0x3, 0x3, 0xc003182a00)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
	test/e2e/apps/statefulset.go:643 +0x6d0

There were additional failures detected after the initial failure:
[FAILED] Nov 25 19:10:58.899: Get "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7733/statefulsets": dial tcp 104.198.13.163:443: connect: connection refused
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76
----------
[FAILED] Nov 25 19:10:58.979: failed to list events in namespace "statefulset-7733": Get "https://104.198.13.163/api/v1/namespaces/statefulset-7733/events": dial tcp 104.198.13.163:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 19:10:59.019: Couldn't delete ns: "statefulset-7733": Delete "https://104.198.13.163/api/v1/namespaces/statefulset-7733": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/statefulset-7733", Err:(*net.OpError)(0xc002ee54f0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

(from junit_01.xml)
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 19:09:23.373 Nov 25 19:09:23.373: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 19:09:23.374 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:09:23.548 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:09:23.637 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-7733 11/25/22 19:09:23.73 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/25/22 19:09:23.791 STEP: Creating stateful set ss in namespace statefulset-7733 11/25/22 19:09:23.861 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7733 11/25/22 19:09:23.932 Nov 25 19:09:23.994: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 19:09:34.050: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 11/25/22 19:09:34.051 Nov 25 19:09:34.110: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:34.553: INFO: rc: 1 Nov 25 19:09:34.553: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:09:44.553: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:45.024: INFO: rc: 1 Nov 25 19:09:45.024: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:09:55.025: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:09:55.587: INFO: rc: 1 Nov 25 19:09:55.587: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 19:10:05.588: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 19:10:06.468: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 19:10:06.468: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 19:10:06.468: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 19:10:06.622: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 25 19:10:16.694: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 19:10:16.694: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 19:10:17.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999108s Nov 25 19:10:18.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.867657947s Nov 25 19:10:19.412: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.736494709s Nov 25 19:10:20.488: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.663901463s Nov 25 19:10:21.573: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.588747165s Nov 25 19:10:22.631: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.503685047s Nov 25 19:10:23.701: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.445477812s Nov 25 19:10:24.769: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.375482405s Nov 25 19:10:25.825: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.307224591s Nov 25 19:10:26.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 250.345512ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7733 11/25/22 19:10:27.919 Nov 25 19:10:27.983: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=statefulset-7733 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 19:10:28.727: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 25 19:10:28.727: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 19:10:28.727: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 19:10:28.779: INFO: Found 1 stateful pods, waiting for 3 Nov 25 19:10:38.838: INFO: Found 2 stateful pods, waiting for 3 Nov 25 19:10:48.919: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 19:10:58.819: INFO: Unexpected error: <*url.Error | 0xc004550bd0>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/statefulset-7733/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: 
<*net.OpError | 0xc002ee4e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004180420>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0012e44e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:58.819: FAIL: Get "https://104.198.13.163/api/v1/namespaces/statefulset-7733/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc002fdc000}, 0xc003182a00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00349a048, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0039f5de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc002fdc000?, 0xc0039f5e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc002fdc000}, 0x3, 0x3, 0xc003182a00) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 E1125 19:10:58.819935 8123 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc002fdc000}, 0xc003182a00)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00349a048, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0039f5de0?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc002fdc000?, 0xc0039f5e20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc002fdc000}, 0x3, 0x3, 0xc003182a00)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()\n\ttest/e2e/apps/statefulset.go:643 +0x6d0", CustomMessage:""}} (Your Test Panicked test/e2e/framework/statefulset/rest.go:69 When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. 
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure ) goroutine 2635 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000adae00}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000adae00?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000adae00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0035ee300, 0xb8}, {0xc00067b540?, 0x75b521a?, 0xc00067b560?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00360c000, 0xa3}, {0xc00067b5d8?, 0xc00360c000?, 0xc00067b600?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc004550bd0}, {0x0?, 0xc0015dc020?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc002fdc000}, 0xc003182a00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00349a048, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc0039f5de0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc002fdc000?, 0xc0039f5e20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc002fdc000}, 0x3, 0x3, 0xc003182a00) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:643 +0x6d0 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002a2b980}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 25 19:10:58.860: INFO: Deleting all statefulset in ns statefulset-7733 Nov 25 19:10:58.899: INFO: Unexpected error: <*url.Error | 0xc00478c060>: { Op: "Get", URL: "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7733/statefulsets", Err: <*net.OpError | 0xc002ee5130>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0041806f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0012e4920>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:58.899: FAIL: Get "https://104.198.13.163/apis/apps/v1/namespaces/statefulset-7733/statefulsets": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc002fdc000}, {0xc0030d9680, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 19:10:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 19:10:58.939 STEP: Collecting events from namespace "statefulset-7733". 
11/25/22 19:10:58.939 Nov 25 19:10:58.979: INFO: Unexpected error: failed to list events in namespace "statefulset-7733": <*url.Error | 0xc004180720>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/statefulset-7733/events", Err: <*net.OpError | 0xc004d40550>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004dbee70>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00485e380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 19:10:58.979: FAIL: failed to list events in namespace "statefulset-7733": Get "https://104.198.13.163/api/v1/namespaces/statefulset-7733/events": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00067a5c0, {0xc0030d9680, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002fdc000}, {0xc0030d9680, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00067a650?, {0xc0030d9680?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009f01e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000a5a490?, 0xc004bfbfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002fdc8a8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000a5a490?, 0x29449fc?}, {0xae73300?, 0xc004bfbf80?, 0xc0036c5008?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-7733" for this suite. 11/25/22 19:10:58.979 Nov 25 19:10:59.019: FAIL: Couldn't delete ns: "statefulset-7733": Delete "https://104.198.13.163/api/v1/namespaces/statefulset-7733": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/statefulset-7733", Err:(*net.OpError)(0xc002ee54f0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009f01e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000a5a340?, 0xc004117fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000a5a340?, 0x0?}, {0xae73300?, 0x5?, 0xc00366e600?}) /usr/local/go/src/reflect/value.go:368 +0xbc
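Note: both StatefulSet failures die inside WaitForRunningAndReady, which is essentially a poll loop over the same pod listing: re-list the selected pods every few seconds and count how many are Running and Ready until the expected number is reached or the timeout expires. Below is a hedged sketch of that loop, reusing the podsForStatefulSet helper sketched earlier; the 2s/10m interval and timeout are illustrative, and the real framework turns a list error into the Fail/panic shown in the log rather than returning it.

```go
package e2esketch

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningAndReady polls until `want` pods of the StatefulSet are both
// Running and Ready. Returning a non-nil error from the condition aborts the
// poll, which is why an apiserver "connection refused" ends the wait
// immediately instead of being retried.
func waitForRunningAndReady(ctx context.Context, cs kubernetes.Interface, ss *appsv1.StatefulSet, want int) error {
	return wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := podsForStatefulSet(ctx, cs, ss)
		if err != nil {
			return false, err
		}
		ready := 0
		for _, p := range pods.Items {
			if p.Status.Phase != v1.PodRunning {
				continue
			}
			for _, c := range p.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					ready++
					break
				}
			}
		}
		return ready == want, nil
	})
}
```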
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
	test/e2e/auth/service_accounts.go:520 +0x9ab

(from junit_01.xml)
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 19:03:46.362 Nov 25 19:03:46.362: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/25/22 19:03:46.364 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:03:46.627 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:03:46.73 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 25 19:03:46.944: INFO: created pod Nov 25 19:03:46.944: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 25 19:03:46.944: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-3211" to be "running and ready" Nov 25 19:03:47.039: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 94.855244ms Nov 25 19:03:47.039: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:03:49.101: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156555996s Nov 25 19:03:49.101: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:03:51.111: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166891174s Nov 25 19:03:51.111: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:03:53.154: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209561582s Nov 25 19:03:53.154: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:03:55.131: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 8.186997618s Nov 25 19:03:55.131: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 25 19:03:55.131: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient] Nov 25 19:03:55.132: INFO: pod is ready Nov 25 19:04:55.132: INFO: polling logs Nov 25 19:04:55.222: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 19:05:55.132: INFO: polling logs Nov 25 19:06:37.885: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 19:06:55.133: INFO: polling logs Nov 25 19:06:55.189: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 19:07:55.132: INFO: polling logs Nov 25 19:07:55.218: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m0.487s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m0.001s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:08:55.132: INFO: polling logs Nov 25 19:08:55.235: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m20.489s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m20.003s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m40.492s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m40.006s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m0.494s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m0.008s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:09:55.132: INFO: polling logs Nov 25 19:09:55.220: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m20.497s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m20.011s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m40.501s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 6m40.015s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m0.503s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 7m0.017s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:10:55.132: INFO: polling logs Nov 25 19:10:55.171: INFO: Error pulling logs: Get "https://104.198.13.163/api/v1/namespaces/svcaccounts-3211/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m20.505s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 7m20.019s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m40.507s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 7m40.021s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m0.509s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 8m0.023s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:11:55.132: INFO: polling logs Nov 25 19:11:55.270: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m20.511s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 8m20.025s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m40.513s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 8m40.027s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m0.516s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 9m0.029s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:12:55.132: INFO: polling logs Nov 25 19:12:55.176: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m20.518s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 9m20.032s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m40.52s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 9m40.034s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m0.522s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m0.036s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:13:55.132: INFO: polling logs Nov 25 19:13:55.176: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m20.599s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m20.113s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m40.601s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m40.115s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m0.604s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m0.118s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:14:55.132: INFO: polling logs Nov 25 19:14:55.259: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m20.606s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m20.12s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m40.61s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m40.123s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m0.612s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m0.126s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:15:55.132: INFO: polling logs ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m20.615s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m20.129s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc004916180, 0xc0017be500) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc004184d00, 0xc0017be500, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0017edb80?}, 0xc0017be500?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0017edb80, 0xc0017be500) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc002050210?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc004d78ed0, 0xc0017be400) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004181f40, 0xc0017be300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0017be300, {0x7fad100, 0xc004181f40}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc004d78f00, 0xc0017be300, {0x7f601cf7e108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc004d78f00, 0xc0017be300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}, 0x7f5fede47da0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc00279e340?}, {0xc0054a7380, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m40.619s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m40.132s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc004916180, 0xc0017be500) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc004184d00, 0xc0017be500, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0017edb80?}, 0xc0017be500?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0017edb80, 0xc0017be500) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc002050210?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc004d78ed0, 0xc0017be400) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004181f40, 0xc0017be300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0017be300, {0x7fad100, 0xc004181f40}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc004d78f00, 0xc0017be300, {0x7f601cf7e108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc004d78f00, 0xc0017be300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}, 0x7f5fede47da0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc00279e340?}, {0xc0054a7380, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:16:38.437: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m0.621s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m0.134s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:16:55.132: INFO: polling logs Nov 25 19:16:55.256: INFO: Retrying. 
Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m20.623s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m20.137s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m40.626s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m40.139s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m0.628s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m0.142s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:17:55.132: INFO: polling logs ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m20.629s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m20.143s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc004916180, 0xc0017be500) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc004184d00, 0xc0017be500, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0017edb80?}, 0xc0017be500?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0017edb80, 0xc0017be500) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0026f01b0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc004d78ed0, 0xc0017be400) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004181f40, 0xc0017be300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0017be300, {0x7fad100, 0xc004181f40}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc004d78f00, 0xc0017be300, {0x7f601cf7e108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc004d78f00, 0xc0017be300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}, 0x7f5fec24b170?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0017be100, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc00279e340?}, {0xc0054a7380, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:18:19.419: INFO: Retrying. 
Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m40.631s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m40.145s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m0.633s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m0.147s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:18:55.132: INFO: polling logs Nov 25 19:18:55.187: INFO: Retrying. 
Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m20.635s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m20.149s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m40.637s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m40.151s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m0.64s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m0.153s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:19:55.132: INFO: polling logs Nov 25 19:19:55.176: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m20.642s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m20.156s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m40.644s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m40.158s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m0.647s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m0.16s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:20:55.132: INFO: polling logs Nov 25 19:20:55.171: INFO: Error pulling logs: Get "https://104.198.13.163/api/v1/namespaces/svcaccounts-3211/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m20.649s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m20.163s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m40.651s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m40.165s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m0.653s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m0.167s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:21:55.133: INFO: polling logs Nov 25 19:21:55.172: INFO: Error pulling logs: Get "https://104.198.13.163/api/v1/namespaces/svcaccounts-3211/pods/inclusterclient/log?container=inclusterclient&previous=false": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m20.655s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m20.169s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m40.657s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m40.171s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m0.66s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m0.173s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:22:55.132: INFO: polling logs Nov 25 19:22:55.275: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m20.661s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m20.175s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m40.663s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 19m40.177s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #15 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 20m0.666s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 20m0.179s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 1645 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004ab1158, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc00349be08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00252bb00, 0xc004a949c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:23:55.132: INFO: polling logs Nov 25 19:23:55.180: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 19:23:55.180: INFO: polling logs Nov 25 19:23:55.225: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 25 19:23:55.225: FAIL: Unexpected error: timed out waiting for the condition I1125 19:03:48.732997 1 main.go:61] started I1125 19:04:18.737076 1 main.go:79] calling /healthz I1125 19:04:18.737628 1 main.go:96] authz_header=-gdA__3AnWeKj2yuj9cI1IDrDc_s90bbHJnEuBDkLHY I1125 19:04:48.737099 1 main.go:79] calling /healthz I1125 19:04:48.737290 1 main.go:96] authz_header=-gdA__3AnWeKj2yuj9cI1IDrDc_s90bbHJnEuBDkLHY Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 25 19:23:55.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 19:23:55.308 STEP: Collecting events from namespace "svcaccounts-3211". 11/25/22 19:23:55.308 STEP: Found 5 events. 
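That FAIL is the poll timing out: the spec keeps logging "Retrying. Still waiting to see more unique tokens: got=1, want=2" because the inclusterclient pod's log (quoted in the failure message, e.g. "main.go:96] authz_header=-gdA__...") only ever shows one distinct authorization-header value; the events listed just below also show the kubelet stopping the container at 19:05:10. A rough sketch of the kind of uniqueness check involved, with a hypothetical helper name rather than the actual code in test/e2e/auth/service_accounts.go:

package main

import (
	"fmt"
	"strings"
)

// countUniqueAuthzHeaders scans pod-log text for "authz_header=" entries and
// counts the distinct values; the token-rotation test wants to see at least
// two different values once the projected service-account token has rotated.
func countUniqueAuthzHeaders(log string) int {
	seen := map[string]struct{}{}
	for _, line := range strings.Split(log, "\n") {
		if i := strings.Index(line, "authz_header="); i >= 0 {
			if tok := strings.TrimSpace(line[i+len("authz_header="):]); tok != "" {
				seen[tok] = struct{}{}
			}
		}
	}
	return len(seen)
}

func main() {
	// The two log lines captured in the failure message carry the same value,
	// so this prints 1, matching "got=1, want=2" in the retries above.
	sample := "I1125 19:04:18.737628 1 main.go:96] authz_header=-gdA__3AnWeKj2yuj9cI1IDrDc_s90bbHJnEuBDkLHY\n" +
		"I1125 19:04:48.737290 1 main.go:96] authz_header=-gdA__3AnWeKj2yuj9cI1IDrDc_s90bbHJnEuBDkLHY\n"
	fmt.Println(countUniqueAuthzHeaders(sample))
}

Because the count never reaches 2, the surrounding wait.Poll gives up and the test reports the generic "timed out waiting for the condition" error seen above.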
11/25/22 19:24:54.51 Nov 25 19:24:54.510: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for inclusterclient: { } Scheduled: Successfully assigned svcaccounts-3211/inclusterclient to bootstrap-e2e-minion-group-ft5h Nov 25 19:24:54.510: INFO: At 2022-11-25 19:03:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:24:54.510: INFO: At 2022-11-25 19:03:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container inclusterclient Nov 25 19:24:54.510: INFO: At 2022-11-25 19:03:48 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container inclusterclient Nov 25 19:24:54.510: INFO: At 2022-11-25 19:05:10 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container inclusterclient Nov 25 19:24:54.551: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:24:54.551: INFO: inclusterclient bootstrap-e2e-minion-group-ft5h Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:11 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:11 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:46 +0000 UTC }] Nov 25 19:24:54.551: INFO: Nov 25 19:24:54.598: INFO: Unable to fetch svcaccounts-3211/inclusterclient/inclusterclient logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) Nov 25 19:24:54.645: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:24:54.687: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 13208 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:22:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:22:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:104.198.13.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:baec19a097c341ae8d14b5ee519a12bc,SystemUUID:baec19a0-97c3-41ae-8d14-b5ee519a12bc,BootID:cbb52bbc-4a45-4571-8271-7b01e70f9d0d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:24:54.687: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:24:54.733: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:24:54.775: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 19:24:54.775: INFO: Logging node info for node bootstrap-e2e-minion-group-ft5h Nov 25 19:24:54.817: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ft5h f6d0c520-a72a-4938-9464-c37052e3eead 13441 0 2022-11-25 18:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-ft5h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-ft5h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2558":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-8147":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-1772":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-4834":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9297":"bootstrap-e2e-minion-group-ft5h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:15:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 19:20:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:24:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-ft5h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:20:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 
19:22:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:22:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.110.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b840d0c08d09e2ff89c00115dd74e373,SystemUUID:b840d0c0-8d09-e2ff-89c0-0115dd74e373,BootID:2c546fd1-5b2e-4c92-9b03-025eb8882457,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2558^8ae769bd-6cf5-11ed-8f84-d2471db8189f,DevicePath:,},},Config:nil,},} Nov 25 19:24:54.818: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ft5h Nov 25 19:24:54.861: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:24:54.904: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-ft5h: error trying to reach service: No agent available Nov 25 19:24:54.904: INFO: Logging node info for node bootstrap-e2e-minion-group-p8wv Nov 25 19:24:54.946: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-p8wv 881c9872-bf9e-40c3-a0e6-f3f276af90f5 13443 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-p8wv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-p8wv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-5865":"bootstrap-e2e-minion-group-p8wv","csi-mock-csi-mock-volumes-8447":"csi-mock-csi-mock-volumes-8447","csi-mock-csi-mock-volumes-8606":"bootstrap-e2e-minion-group-p8wv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:14:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:20:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:24:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-p8wv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:20:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:20:06 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:20:06 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:20:06 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:20:06 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.198.109.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f8ac2f5b7ee9732248672e4b22a9ad9,SystemUUID:7f8ac2f5-b7ee-9732-2486-72e4b22a9ad9,BootID:ba8ee318-1295-42a5-a59a-f3bfc254bc58,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:24:54.947: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-p8wv Nov 25 19:24:55.022: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-p8wv Nov 25 19:24:55.134: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-p8wv: error trying to reach service: No agent available Nov 25 19:24:55.134: INFO: Logging node info for node bootstrap-e2e-minion-group-rvwg Nov 25 19:24:55.175: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rvwg d72b04ed-8c3e-4237-a1a1-842914101de6 13333 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rvwg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rvwg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumeio-2766":"bootstrap-e2e-minion-group-rvwg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:08:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:20:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:23:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-rvwg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:20:45 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:24 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:24 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:22:24 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:22:24 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.2.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dbf5cb99be0ec068c5cd2f1643938098,SystemUUID:dbf5cb99-be0e-c068-c5cd-2f1643938098,BootID:e2f727ad-5c6c-4e26-854f-4f7e80c2c71f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:24:55.176: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rvwg Nov 25 19:24:55.220: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rvwg Nov 25 19:24:55.264: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-rvwg: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-3211" for this suite. 11/25/22 19:24:55.264
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:59:52.100: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7081 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://104.198.13.163/api/v1/namespaces/kubectl-7081/pods/httpd": dial tcp 104.198.13.163:443: connect: connection refused error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87 ---------- [FAILED] Nov 25 18:59:52.180: failed to list events in namespace "kubectl-7081": Get "https://104.198.13.163/api/v1/namespaces/kubectl-7081/events": dial tcp 104.198.13.163:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:59:52.220: Couldn't delete ns: "kubectl-7081": Delete "https://104.198.13.163/api/v1/namespaces/kubectl-7081": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/kubectl-7081", Err:(*net.OpError)(0xc003ecc5f0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:58:20.639 Nov 25 18:58:20.639: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 18:58:20.641 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:58:20.78 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:58:20.862 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 18:58:20.95 Nov 25 18:58:20.951: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7081 create -f -' Nov 25 18:58:21.871: INFO: stderr: "" Nov 25 18:58:21.871: INFO: stdout: "pod/httpd created\n" Nov 25 18:58:21.871: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 18:58:21.871: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7081" to be "running and ready" Nov 25 18:58:21.943: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 72.121127ms Nov 25 18:58:21.943: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:58:23.995: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123785732s Nov 25 18:58:23.995: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:58:25.988: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116905659s Nov 25 18:58:25.988: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:58:27.986: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114791842s Nov 25 18:58:27.986: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:58:30.051: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179562139s Nov 25 18:58:30.051: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:58:31.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.117015277s Nov 25 18:58:31.988: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:34.017: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.145796519s Nov 25 18:58:34.017: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:35.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.115152385s Nov 25 18:58:35.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:37.985: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.1138911s Nov 25 18:58:37.985: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:40.006: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.134696135s Nov 25 18:58:40.006: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:41.989: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.117850106s Nov 25 18:58:41.989: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:43.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.115973738s Nov 25 18:58:43.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:46.005: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.133448421s Nov 25 18:58:46.005: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:47.989: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.117993032s Nov 25 18:58:47.989: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:49.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.116629868s Nov 25 18:58:49.988: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:52.004: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.132615318s Nov 25 18:58:52.004: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:53.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.115529144s Nov 25 18:58:53.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:55.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 34.115939968s Nov 25 18:58:55.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:57.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.114752855s Nov 25 18:58:57.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:58:59.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 38.115176483s Nov 25 18:58:59.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:01.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 40.114729885s Nov 25 18:59:01.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:03.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 42.114870762s Nov 25 18:59:03.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:05.989: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.117316471s Nov 25 18:59:05.989: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:07.996: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 46.125148141s Nov 25 18:59:07.996: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:09.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 48.117220041s Nov 25 18:59:09.989: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:11.989: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 50.11809409s Nov 25 18:59:11.989: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:13.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.115295115s Nov 25 18:59:13.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:15.995: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 54.123932863s Nov 25 18:59:15.995: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:17.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 56.115356343s Nov 25 18:59:17.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:19.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 58.116499701s Nov 25 18:59:19.988: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:21.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.116036438s Nov 25 18:59:21.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:23.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.114406027s Nov 25 18:59:23.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:25.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.115749823s Nov 25 18:59:25.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:27.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.115870473s Nov 25 18:59:27.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:29.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m8.1160623s Nov 25 18:59:29.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:31.997: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.12590453s Nov 25 18:59:31.997: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:33.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.116763002s Nov 25 18:59:33.988: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:04 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:35.988: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.117153869s Nov 25 18:59:35.988: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:37.996: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m16.124972543s Nov 25 18:59:37.996: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:39.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.115596676s Nov 25 18:59:39.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:41.987: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.115730651s Nov 25 18:59:41.987: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:43.986: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.115074531s Nov 25 18:59:43.986: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:46.015: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m24.143743301s Nov 25 18:59:46.015: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:47.995: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.124001886s Nov 25 18:59:47.995: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:50.049: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.177931208s Nov 25 18:59:50.049: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-p8wv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:59:33 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:21 +0000 UTC }] Nov 25 18:59:51.983: INFO: Encountered non-retryable error while getting pod kubectl-7081/httpd: Get "https://104.198.13.163/api/v1/namespaces/kubectl-7081/pods/httpd": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:51.983: INFO: Pod httpd failed to be running and ready. Nov 25 18:59:51.983: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd] Nov 25 18:59:51.984: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 18:59:51.984 Nov 25 18:59:51.984: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7081 delete --grace-period=0 --force -f -' Nov 25 18:59:52.100: INFO: rc: 1 Nov 25 18:59:52.100: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc000dfead0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7081 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://104.198.13.163/api/v1/namespaces/kubectl-7081/pods/httpd\": dial tcp 104.198.13.163:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 18:59:52.100: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7081 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://104.198.13.163/api/v1/namespaces/kubectl-7081/pods/httpd": dial tcp 104.198.13.163:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc00100bce0?, 0x0?}, {0xc0008ce320, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc0008ce320, 0xc}, {0xc003e71a20, 0x145}, {0xc000c51ec0?, 0x8?, 0x7f59537bc5b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc003e71a20, 0x145}, {0xc0008ce320, 0xc}, {0xc000e28630, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 18:59:52.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:59:52.14 STEP: Collecting events from namespace "kubectl-7081". 
11/25/22 18:59:52.14 Nov 25 18:59:52.180: INFO: Unexpected error: failed to list events in namespace "kubectl-7081": <*url.Error | 0xc0019a6000>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/kubectl-7081/events", Err: <*net.OpError | 0xc000cf4230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000c4bf20>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00117f480>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:59:52.180: FAIL: failed to list events in namespace "kubectl-7081": Get "https://104.198.13.163/api/v1/namespaces/kubectl-7081/events": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001b645c0, {0xc0008ce320, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0014d5520}, {0xc0008ce320, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001b64650?, {0xc0008ce320?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000d3e2d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000f81920?, 0xc001054fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc0017ce708?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f81920?, 0x29449fc?}, {0xae73300?, 0xc001054f80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-7081" for this suite. 11/25/22 18:59:52.18 Nov 25 18:59:52.220: FAIL: Couldn't delete ns: "kubectl-7081": Delete "https://104.198.13.163/api/v1/namespaces/kubectl-7081": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/kubectl-7081", Err:(*net.OpError)(0xc003ecc5f0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d3e2d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000f81830?, 0xc0000cefb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f81830?, 0x0?}, {0xae73300?, 0x5?, 0xc001061848?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000a4e460, {0x75c6f7c, 0x9}, 0xc0009af260) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000a4e460, 0x7fd7787b67c8?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a4e460, 0x3b?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000e2a000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 There were additional failures detected after the initial failure: [FAILED] Nov 25 18:59:52.032: failed to list events in namespace "esipp-730": Get "https://104.198.13.163/api/v1/namespaces/esipp-730/events": dial tcp 104.198.13.163:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 18:59:52.072: Couldn't delete ns: "esipp-730": Delete "https://104.198.13.163/api/v1/namespaces/esipp-730": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/esipp-730", Err:(*net.OpError)(0xc0015acd70)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:58:20.65 Nov 25 18:58:20.650: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:58:20.672 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:58:20.83 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:58:20.911 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-730/external-local-update with type=LoadBalancer 11/25/22 18:58:21.476 STEP: setting ExternalTrafficPolicy=Local 11/25/22 18:58:21.476 STEP: waiting for loadbalancer for service esipp-730/external-local-update 11/25/22 18:58:21.93 Nov 25 18:58:21.930: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/25/22 18:59:02.051 Nov 25 18:59:02.101: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:59:02.157: INFO: Found all 1 pods Nov 25 18:59:02.157: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-gqvff] Nov 25 18:59:02.157: INFO: Waiting up to 2m0s for pod "external-local-update-gqvff" in namespace "esipp-730" to be "running and ready" Nov 25 18:59:02.200: INFO: Pod "external-local-update-gqvff": Phase="Pending", Reason="", readiness=false. Elapsed: 42.724637ms Nov 25 18:59:02.200: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-gqvff' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:59:04.245: INFO: Pod "external-local-update-gqvff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087341773s Nov 25 18:59:04.245: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-gqvff' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:59:06.243: INFO: Pod "external-local-update-gqvff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085376415s Nov 25 18:59:06.243: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-gqvff' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:59:08.245: INFO: Pod "external-local-update-gqvff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087618519s Nov 25 18:59:08.245: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-gqvff' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:59:10.253: INFO: Pod "external-local-update-gqvff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095837864s Nov 25 18:59:10.253: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-gqvff' on 'bootstrap-e2e-minion-group-p8wv' to be 'Running' but was 'Pending' Nov 25 18:59:12.242: INFO: Pod "external-local-update-gqvff": Phase="Running", Reason="", readiness=true. Elapsed: 10.085017448s Nov 25 18:59:12.242: INFO: Pod "external-local-update-gqvff" satisfied condition "running and ready" Nov 25 18:59:12.242: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-update-gqvff] STEP: waiting for loadbalancer for service esipp-730/external-local-update 11/25/22 18:59:12.242 Nov 25 18:59:12.242: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/25/22 18:59:12.283 STEP: Performing setup for networking test in namespace esipp-730 11/25/22 18:59:13.468 STEP: creating a selector 11/25/22 18:59:13.468 STEP: Creating the service pods in kubernetes 11/25/22 18:59:13.468 Nov 25 18:59:13.468: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 18:59:13.702: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-730" to be "running and ready" Nov 25 18:59:13.744: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 41.839397ms Nov 25 18:59:13.744: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 18:59:15.787: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.085331967s Nov 25 18:59:15.787: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:17.785: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.082998923s Nov 25 18:59:17.785: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:19.792: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.089490694s Nov 25 18:59:19.792: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:21.791: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.088677455s Nov 25 18:59:21.791: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:23.786: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.083621892s Nov 25 18:59:23.786: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:25.786: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.084029669s Nov 25 18:59:25.786: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:27.786: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.084157846s Nov 25 18:59:27.786: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:29.791: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.088863658s Nov 25 18:59:29.791: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:31.791: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.088627014s Nov 25 18:59:31.791: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:33.786: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.083549254s Nov 25 18:59:33.786: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:35.785: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.083333873s Nov 25 18:59:35.785: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:37.826: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.124235984s Nov 25 18:59:37.826: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:39.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.086354141s Nov 25 18:59:39.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:41.787: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.084948302s Nov 25 18:59:41.787: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:43.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.086107065s Nov 25 18:59:43.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:45.795: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.092543631s Nov 25 18:59:45.795: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:47.807: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.104675243s Nov 25 18:59:47.807: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:49.793: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.090661622s Nov 25 18:59:49.793: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 18:59:51.784: INFO: Encountered non-retryable error while getting pod esipp-730/netserver-0: Get "https://104.198.13.163/api/v1/namespaces/esipp-730/pods/netserver-0": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:51.785: INFO: Unexpected error: <*fmt.wrapError | 0xc000e0e860>: { msg: "error while waiting for pod esipp-730/netserver-0 to be running and ready: Get \"https://104.198.13.163/api/v1/namespaces/esipp-730/pods/netserver-0\": dial tcp 104.198.13.163:443: connect: connection refused", err: <*url.Error | 0xc001e1c0c0>{ Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/esipp-730/pods/netserver-0", Err: <*net.OpError | 0xc002cf3f90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001c67f50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000e0e820>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 25 18:59:51.785: FAIL: error while waiting for pod esipp-730/netserver-0 to be running and ready: Get "https://104.198.13.163/api/v1/namespaces/esipp-730/pods/netserver-0": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000a4e460, {0x75c6f7c, 0x9}, 0xc0009af260) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000a4e460, 0x7fd7787b67c8?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a4e460, 0x3b?) 
test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000e2a000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 Nov 25 18:59:51.824: INFO: Unexpected error: <*errors.errorString | 0xc0016d88a0>: { s: "failed to get Service \"external-local-update\": Get \"https://104.198.13.163/api/v1/namespaces/esipp-730/services/external-local-update\": dial tcp 104.198.13.163:443: connect: connection refused", } Nov 25 18:59:51.824: FAIL: failed to get Service "external-local-update": Get "https://104.198.13.163/api/v1/namespaces/esipp-730/services/external-local-update": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7.1() test/e2e/network/loadbalancer.go:1495 +0xae panic({0x70eb7e0, 0xc0003f2a10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000f58c30, 0xce}, {0xc000a290e0?, 0xc000f58c30?, 0xc000a29108?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc000e0e860}, {0x0?, 0xc0040c2010?, 0xc000a29200?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000a4e460, {0x75c6f7c, 0x9}, 0xc0009af260) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000a4e460, 0x7fd7787b67c8?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a4e460, 0x3b?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000e2a000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 18:59:51.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 18:59:51.865: INFO: Output of kubectl describe svc: Nov 25 18:59:51.865: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=esipp-730 describe svc --namespace=esipp-730' Nov 25 18:59:51.992: INFO: rc: 1 Nov 25 18:59:51.992: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 18:59:51.992 STEP: Collecting events from namespace "esipp-730". 
11/25/22 18:59:51.993 Nov 25 18:59:52.032: INFO: Unexpected error: failed to list events in namespace "esipp-730": <*url.Error | 0xc0017dc9f0>: { Op: "Get", URL: "https://104.198.13.163/api/v1/namespaces/esipp-730/events", Err: <*net.OpError | 0xc0015aca50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001e1cb40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 13, 163], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003d23e00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 18:59:52.032: FAIL: failed to list events in namespace "esipp-730": Get "https://104.198.13.163/api/v1/namespaces/esipp-730/events": dial tcp 104.198.13.163:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc000b505c0, {0xc0040c2010, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0043fc000}, {0xc0040c2010, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000b50650?, {0xc0040c2010?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000e2a000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0010a5c10?, 0xc001644fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0010a5c10?, 0x0?}, {0xae73300?, 0x5?, 0xc000a031a0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-730" for this suite. 11/25/22 18:59:52.033 Nov 25 18:59:52.072: FAIL: Couldn't delete ns: "esipp-730": Delete "https://104.198.13.163/api/v1/namespaces/esipp-730": dial tcp 104.198.13.163:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://104.198.13.163/api/v1/namespaces/esipp-730", Err:(*net.OpError)(0xc0015acd70)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e2a000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0010a5b00?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0010a5b00?, 0x7fe0bc8?}, {0xae73300?, 0x100000000000000?, 0xc001723770?}) /usr/local/go/src/reflect/value.go:368 +0xbc
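Before the apiserver became unreachable, most of this spec's time was spent in the "running and ready" polling visible above (the WaitTimeoutForPodReadyInNamespace frames in the stack trace). The sketch below is only a rough illustration of that wait pattern with made-up package and function names, not the framework's implementation: poll the Pod until its phase is Running and its Ready condition is True, or the timeout expires.

// Sketch of a "running and ready" wait, assuming standard client-go/apimachinery imports.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodRunningAndReady polls the Pod every 2s until it is Running and Ready.
func WaitForPodRunningAndReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// An unreachable apiserver ("connection refused") surfaces here; returning the
			// error aborts the wait instead of retrying, mirroring the non-retryable
			// failure seen in the log.
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			fmt.Printf("The phase of Pod %s is %s, waiting for it to be Running\n", name, pod.Status.Phase)
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("The phase of Pod %s is Running (Ready = false)\n", name)
		return false, nil
	})
}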
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/network/loadbalancer.go:1416 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1416 +0x9a8
from junit_01.xml
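In the detailed log that follows, the spec first waits up to 15m0s for the Service to be assigned a LoadBalancer, logging "Retrying ...." each time the GET against the apiserver fails. The snippet below is a hedged sketch of that polling loop with illustrative names; it is not the framework's service test-jig helper.

// Sketch: poll a Service until status.loadBalancer.ingress is populated, tolerating
// transient GET errors such as "connection refused" instead of failing immediately.
package svcwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLoadBalancerIngress re-reads the Service every 2s until the cloud provider
// has filled in an external IP or hostname, or the timeout expires.
func WaitForLoadBalancerIngress(c kubernetes.Interface, ns, name string, timeout time.Duration) (*v1.Service, error) {
	var svc *v1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Retry on transient apiserver errors; this is what produces the
			// "Retrying .... error trying to get Service ..." lines in the log.
			fmt.Printf("Retrying .... error trying to get Service %s: %v\n", name, err)
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			// No external IP/hostname assigned yet.
			return false, nil
		}
		svc = s
		return true, nil
	})
	return svc, err
}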
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:59:14.395 Nov 25 18:59:14.395: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:59:14.397 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:59:14.704 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:59:14.823 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-2809/external-local-nodes with type=LoadBalancer 11/25/22 18:59:15.081 STEP: setting ExternalTrafficPolicy=Local 11/25/22 18:59:15.081 STEP: waiting for loadbalancer for service esipp-2809/external-local-nodes 11/25/22 18:59:15.156 Nov 25 18:59:15.156: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer Nov 25 18:59:51.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:53.246: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:55.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:57.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:59.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:01.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:03.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:05.246: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:07.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:09.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:11.245: INFO: Retrying .... 
error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:13.246: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:15.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:17.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:19.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:21.245: INFO: Retrying .... error trying to get Service external-local-nodes: Get "https://104.198.13.163/api/v1/namespaces/esipp-2809/services/external-local-nodes": dial tcp 104.198.13.163:443: connect: connection refused STEP: waiting for loadbalancer for service esipp-2809/external-local-nodes 11/25/22 19:03:49.332 Nov 25 19:03:49.332: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-2809 11/25/22 19:03:49.434 STEP: creating a selector 11/25/22 19:03:49.434 STEP: Creating the service pods in kubernetes 11/25/22 19:03:49.434 Nov 25 19:03:49.434: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 19:03:49.961: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-2809" to be "running and ready" Nov 25 19:03:50.035: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 73.947856ms Nov 25 19:03:50.035: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:03:52.107: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146566756s Nov 25 19:03:52.107: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:03:54.093: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132351538s Nov 25 19:03:54.093: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:03:56.108: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.146918802s Nov 25 19:03:56.108: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:03:58.088: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.126966399s Nov 25 19:03:58.088: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:00.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.117283111s Nov 25 19:04:00.078: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:02.120: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.159542959s Nov 25 19:04:02.120: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:04.079: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.117801085s Nov 25 19:04:04.079: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:06.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.117184974s Nov 25 19:04:06.078: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:08.077: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.115844641s Nov 25 19:04:08.077: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:10.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.11723131s Nov 25 19:04:10.078: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:12.077: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.116297341s Nov 25 19:04:12.077: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:14.078: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.1168212s Nov 25 19:04:14.078: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m0.638s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 25.598s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004794b88, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004bdb5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000712820}, {0xc00485c520, 0xa}, {0xc004a44ff0, 0xb}, {0x75ee704, 0x11}, 0x7f8f401?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000712820?}, {0xc004a44ff0?, 0xc004eb1420?}, {0xc00485c520?, 0xc000b61820?}, 0x271e5fe?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004822000, {0x75c6f7c, 0x9}, 0xc004ad9c50) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004822000, 0x7fa3c0034310?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004822000, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011fa000, {0x0, 0x0, 0xc004a8f190?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:16.077: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.116239537s Nov 25 19:04:16.077: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:18.077: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.116177931s Nov 25 19:04:18.077: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:04:20.077: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 30.116478428s Nov 25 19:04:20.077: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 19:04:20.077: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 19:04:20.119: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-2809" to be "running and ready" Nov 25 19:04:20.165: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 45.623435ms Nov 25 19:04:20.165: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:22.209: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.089724913s Nov 25 19:04:22.209: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:24.238: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.118341401s Nov 25 19:04:24.238: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:26.230: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.111275696s Nov 25 19:04:26.230: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:28.226: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.10670154s Nov 25 19:04:28.226: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:30.220: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.100519918s Nov 25 19:04:30.220: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:32.240: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.121253354s Nov 25 19:04:32.240: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:34.217: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.09762775s Nov 25 19:04:34.217: INFO: The phase of Pod netserver-1 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m20.641s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m20.004s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 45.601s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004795830, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004bdb5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000712820}, {0xc00485c520, 0xa}, {0xc0048cb243, 0xb}, {0x75ee704, 0x11}, 0xc0046c3c20?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000712820?}, {0xc0048cb243?, 0x0?}, {0xc00485c520?, 0x0?}, 0xc0046fec20?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004822000, {0x75c6f7c, 0x9}, 0xc004ad9c50) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004822000, 0x7fa3c0034310?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004822000, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011fa000, {0x0, 0x0, 0xc004a8f190?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:36.223: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.103377978s Nov 25 19:04:36.223: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:38.219: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 18.099700886s Nov 25 19:04:38.219: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:40.245: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.125517365s Nov 25 19:04:40.245: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:42.218: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 22.099001696s Nov 25 19:04:42.218: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:44.215: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 24.095370502s Nov 25 19:04:44.215: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:46.231: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 26.112126637s Nov 25 19:04:46.231: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:48.231: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 28.112307893s Nov 25 19:04:48.231: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:50.231: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 30.111747007s Nov 25 19:04:50.231: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:52.230: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 32.110960271s Nov 25 19:04:52.230: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:54.220: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 34.10130198s Nov 25 19:04:54.220: INFO: The phase of Pod netserver-1 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m40.643s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating the service pods in kubernetes (Step Runtime: 1m5.603s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc004795830, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x88?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004bdb5d8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000712820}, {0xc00485c520, 0xa}, {0xc0048cb243, 0xb}, {0x75ee704, 0x11}, 0xc0046c3c20?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000712820?}, {0xc0048cb243?, 0x0?}, {0xc00485c520?, 0x0?}, 0xc0046fec20?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc004822000, {0x75c6f7c, 0x9}, 0xc004ad9c50) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004822000, 0x7fa3c0034310?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004822000, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011fa000, {0x0, 0x0, 0xc004a8f190?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:56.211: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 36.091972708s Nov 25 19:04:56.211: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:04:58.249: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 38.130316567s Nov 25 19:04:58.250: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:00.212: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 40.092486456s Nov 25 19:05:00.212: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:02.280: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 42.160850313s Nov 25 19:05:02.280: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:04.218: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 44.099004328s Nov 25 19:05:04.218: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:06.302: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 46.182436178s Nov 25 19:05:06.302: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:08.238: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 48.118727897s Nov 25 19:05:08.238: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:10.296: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 50.176826899s Nov 25 19:05:10.296: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:05:12.254: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 52.135056564s Nov 25 19:05:12.254: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 19:05:12.254: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 19:05:12.312: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-2809" to be "running and ready" Nov 25 19:05:12.376: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 63.802268ms Nov 25 19:05:12.376: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 19:05:12.376: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 19:05:12.46 Nov 25 19:05:12.555: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-2809" to be "running" Nov 25 19:05:12.613: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 57.832348ms Nov 25 19:05:14.678: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.12313968s ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m0.644s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m0.007s) test/e2e/network/loadbalancer.go:1346 At [By Step] Creating test pods (Step Runtime: 2.579s) test/e2e/framework/network/utils.go:765 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047948a0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x98?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004bdb8e8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000712820}, {0xc00485c520, 0xa}, {0x75f4fa6, 0x12}, {0x75c00ca, 0x7}, 0x0?, 0x7895ad8) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodRunningInNamespace({0x801de88?, 0xc000712820?}, {0x75f4fa6?, 0x0?}, {0xc00485c520?, 0x0?}, 0x0?) test/e2e/framework/pod/wait.go:522 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodNameRunningInNamespace(...) test/e2e/framework/pod/wait.go:510 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createTestPods(0xc004822000) test/e2e/framework/network/utils.go:727 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc004822000, 0x7fa3c0034310?) test/e2e/framework/network/utils.go:766 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc004822000, 0x3c?) test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011fa000, {0x0, 0x0, 0xc004a8f190?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:16.669: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114397021s Nov 25 19:05:18.663: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10821756s Nov 25 19:05:20.676: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.121231938s Nov 25 19:05:20.676: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 19:05:20.764: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 19:05:20.764 Nov 25 19:05:20.764: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 19:05:20.906 Nov 25 19:05:21.200: INFO: Service node-port-service in namespace esipp-2809 found. Nov 25 19:05:21.454: INFO: Service session-affinity-service in namespace esipp-2809 found. STEP: Waiting for NodePort service to expose endpoint 11/25/22 19:05:21.535 Nov 25 19:05:22.535: INFO: Waiting for amount of service:node-port-service endpoints to be 3 STEP: Waiting for Session Affinity service to expose endpoint 11/25/22 19:05:22.604 Nov 25 19:05:23.604: INFO: Waiting for amount of service:session-affinity-service endpoints to be 3 STEP: creating a pod to be part of the service external-local-nodes on node bootstrap-e2e-minion-group-ft5h 11/25/22 19:05:23.653 Nov 25 19:05:23.748: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 19:05:23.848: INFO: Found all 1 pods Nov 25 19:05:23.848: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodes-98744] Nov 25 19:05:23.848: INFO: Waiting up to 2m0s for pod "external-local-nodes-98744" in namespace "esipp-2809" to be "running and ready" Nov 25 19:05:23.909: INFO: Pod "external-local-nodes-98744": Phase="Pending", Reason="", readiness=false. Elapsed: 60.758375ms Nov 25 19:05:23.909: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodes-98744' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:05:25.964: INFO: Pod "external-local-nodes-98744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116025827s Nov 25 19:05:25.964: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodes-98744' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:05:27.977: INFO: Pod "external-local-nodes-98744": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129091541s Nov 25 19:05:27.977: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodes-98744' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:05:29.981: INFO: Pod "external-local-nodes-98744": Phase="Running", Reason="", readiness=true. Elapsed: 6.132787427s Nov 25 19:05:29.981: INFO: Pod "external-local-nodes-98744" satisfied condition "running and ready" Nov 25 19:05:29.981: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-nodes-98744] STEP: waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h 11/25/22 19:05:29.981 Nov 25 19:05:30.051: INFO: Pod for service esipp-2809/external-local-nodes is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:05:30.051: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m20.647s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m20.009s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 5.06s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] net/http.(*Transport).getConn(0xc000ca0a00, 0xc004a906c0, {{}, 0x0, {0xc0010b6f30, 0x4}, {0xc0048ca4c0, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000ca0a00, 0xc000d38a00) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d38a00?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d38900, {0x7fadc80, 0xc000ca0a00}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc004ea92f0, 0xc000d38900, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc004ea92f0, 0xc000d38900) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc0010b6f30?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc0010b6f30, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbbd8?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xae422e0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc004bdbc10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:40.051: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:42.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:05:42.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:05:44.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:05:54.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 6m40.65s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 6m40.013s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 25.064s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:56.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:06.052: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:06.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:06.093: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:08.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:08.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:10.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:10.091: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:12.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:12.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:14.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:14.091: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m0.653s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m0.015s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 45.066s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) 
test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:16.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:16.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:18.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:18.091: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:20.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:20.093: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:22.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:22.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:24.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:24.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:26.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:26.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:28.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:28.091: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:30.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:30.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:32.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m20.655s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m20.017s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 1m5.068s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] net/http.(*Transport).getConn(0xc000ca0f00, 0xc004a90900, {{}, 0x0, {0xc0010b7380, 0x4}, {0xc0048ca910, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000ca0f00, 0xc000d39200) 
/usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d39200?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d39100, {0x7fadc80, 0xc000ca0f00}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc004ea9ce0, 0xc000d39100, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc004ea9ce0, 0xc000d39100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc0010b7380?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc0010b7380, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbb84?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:42.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:44.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:44.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:46.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:06:46.092: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": dial tcp 34.83.177.2:8081: connect: connection refused Nov 25 19:06:48.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 7m40.658s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 7m40.021s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 1m25.072s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select, 2 minutes] net/http.(*Transport).getConn(0xc000ae5cc0, 0xc000728ec0, {{}, 0x0, {0xc00370aab0, 0x4}, {0xc0042db640, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000ae5cc0, 0xc000afa900) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000afa900?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000afa800, {0x7fadc80, 0xc000ae5cc0}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc000cab680, 0xc000afa800, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc000cab680, 0xc000afa800) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc00370aab0?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00370aab0, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbb84?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x5?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:58.052: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:58.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:07:08.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:07:10.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m0.661s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m0.023s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 1m45.074s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] net/http.(*Transport).getConn(0xc000efa500, 0xc000728000, {{}, 0x0, {0xc00370a090, 0x4}, {0xc004a44040, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000efa500, 0xc000340a00) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000340a00?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000340100, {0x7fadc80, 0xc000efa500}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc000caa3c0, 0xc000340100, {0x0?, 0xc004bdb500?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc000caa3c0, 0xc000340100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) 
/usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc00370a090?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00370a090, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbb84?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:20.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:07:22.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:07:32.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:07:34.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m20.663s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m20.026s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 2m5.077s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] net/http.(*Transport).getConn(0xc003f94780, 0xc000728100, {{}, 0x0, {0xc00370acf0, 0x4}, {0xc004a44120, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc003f94780, 0xc000341400) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000341400?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000340f00, {0x7fadc80, 0xc003f94780}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc000caa720, 0xc000340f00, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc000caa720, 0xc000340f00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc00370acf0?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00370acf0, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbb84?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:44.054: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:07:46.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 8m40.666s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 8m40.029s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 2m25.08s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] net/http.(*Transport).getConn(0xc0017ebe00, 0xc004992140, {{}, 0x0, {0xc000bb60c0, 0x4}, {0xc0042da0a0, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc0017ebe00, 0xc000afa500) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000afa500?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000afa400, {0x7fadc80, 0xc0017ebe00}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc00072a570, 0xc000afa400, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc00072a570, 0xc000afa400) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc000bb60c0?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc000bb60c0, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004af9180, 0xb}, 0x1f91, {0x75ddb6b, 0xf}, 0xc004bdbb84?) 
test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003576540, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x50?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2fdaaaa?, 0xc004bdbca0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2fdaa30?, 0x7fe0bc8?, 0xc0000820c8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004af9180, 0xb}, 0x1f91, {0xae73300, 0x0, 0x0}, 0xc00486ade8?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1404 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:56.053: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): Get "http://34.83.177.2:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:07:58.052: INFO: Poking "http://34.83.177.2:8081/echo?msg=hello" Nov 25 19:07:58.133: INFO: Poke("http://34.83.177.2:8081/echo?msg=hello"): success Nov 25 19:07:58.133: INFO: Health checking bootstrap-e2e-minion-group-ft5h, http://10.138.0.3:30244/healthz, expectedSuccess true Nov 25 19:07:58.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:07:58.267: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:07:58.268: INFO: ExecWithOptions: Clientset creation Nov 25 19:07:58.268: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:07:58.751: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 200 Nov 25 19:07:59.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w 
%{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:07:59.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:07:59.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:07:59.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:00.252: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:00.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:00.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:00.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:00.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:01.636: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:01.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:01.796: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:01.798: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:01.798: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:02.209: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:02.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:02.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:02.793: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:02.793: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:03.262: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:03.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 
PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:03.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:03.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:03.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:04.207: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:04.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:04.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:04.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:04.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:05.324: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:05.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:05.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:05.793: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:05.793: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:06.156: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:06.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:06.794: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:06.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:06.796: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:07.168: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:07.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:07.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:07.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:07.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:08.106: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:08.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:08.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:08.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:08.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:09.099: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:09.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:09.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:09.793: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:09.793: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:10.103: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:10.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:10.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:10.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:10.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:11.109: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:11.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} 
Nov 25 19:08:11.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:11.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:11.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:12.145: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:12.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:12.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:12.793: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:12.793: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:13.107: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:13.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:13.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:13.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:13.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:14.480: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:14.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:14.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:14.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:14.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m0.668s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 9m0.031s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 2m45.082s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine 
goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/remotecommand.(*streamExecutor).StreamWithContext(0x0?, {0x7fe0bc8, 0xc0000820c8}, {{0x0, 0x0}, {0x7fa3a80, 0xc0035eb6b0}, {0x7fa3a80, 0xc0035eb6e0}, 0x0, ...}) vendor/k8s.io/client-go/tools/remotecommand/remotecommand.go:174 k8s.io/kubernetes/test/e2e/framework/pod.execute({0x75b70c2?, 0x21?}, 0xc004bdb6f8?, 0x1?, {0x0, 0x0}, {0x7fa3a80, 0xc0035eb6b0}, {0x7fa3a80, 0xc0035eb6e0}, ...) test/e2e/framework/pod/exec_util.go:146 k8s.io/kubernetes/test/e2e/framework/pod.ExecWithOptions(0xc0011fa000, {{0xc0035ea270, 0x3, 0x3}, {0xc00485c520, 0xa}, {0xc003ebeff0, 0x12}, {0xc001b4b2d0, 0x9}, ...}) test/e2e/framework/pod/exec_util.go:80 k8s.io/kubernetes/test/e2e/framework/pod.ExecCommandInContainerWithFullOutput(0xc000ed0fc0?, {0xc003ebeff0?, 0xae73300?}, {0xc001b4b2d0?, 0x0?}, {0xc0035ea270?, 0xc0009f7640?, 0x7fe0bc8?}) test/e2e/framework/pod/exec_util.go:90 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x771f0dc?, {0xc003ebeff0, 0x12}, {0xc0035ea270, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:128 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetHTTPCodeFromTestContainer(0xc004822000, {0x75c17ef, 0x8}, {0xc004a8a070?, 0x1?}, 0x3?) test/e2e/framework/network/utils.go:420 > k8s.io/kubernetes/test/e2e/network.testHTTPHealthCheckNodePortFromTestContainer.func1() test/e2e/network/service.go:689 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00078ab28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x38?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x7fa7740?, 0xc004bdbc88?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0048ca150?, 0xc0048ca15b?, 0xc004bdbd10?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testHTTPHealthCheckNodePortFromTestContainer(0xc004822000, {0xc004a8a070, 0xa}, 0x7624, 0xc004bdbf00?, 0x1, 0x2) test/e2e/network/service.go:705 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1409 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:08:15.307: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:15.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:15.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:15.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:15.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:16.203: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:16.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:16.795: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:16.796: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:16.796: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:17.209: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:17.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:17.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:17.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:17.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:18.116: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:18.793: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:18.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:18.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:18.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:19.122: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:19.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:19.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:19.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:19.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:20.122: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:20.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:20.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:20.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:20.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:21.103: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:21.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:21.794: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:21.795: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:21.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:22.125: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:22.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} 
http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:22.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:22.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:22.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:23.232: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:23.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:23.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:23.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:23.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:24.124: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:24.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:24.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:24.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:24.795: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:25.123: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:25.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:25.796: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:25.797: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:25.797: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:26.172: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:26.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 
PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:26.801: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:26.802: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:26.802: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:27.119: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:27.793: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:27.793: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:27.794: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:27.794: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:28.102: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:28.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:28.795: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:28.796: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:28.796: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:29.122: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:29.163: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:30244/healthz] Namespace:esipp-2809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 19:08:29.164: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 19:08:29.165: INFO: ExecWithOptions: Clientset creation Nov 25 19:08:29.165: INFO: ExecWithOptions: execute(POST https://104.198.13.163/api/v1/namespaces/esipp-2809/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A30244%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 25 19:08:29.473: INFO: Got status code from http://10.138.0.3:30244/healthz via test container: 0 Nov 25 19:08:29.473: INFO: Unexpected error: <*errors.errorString | 0xc001508490>: { s: "error waiting for healthCheckNodePort: expected at least 2 succeed=true on 10.138.0.3:30244/healthz, got 1", } Nov 25 19:08:29.473: FAIL: error 
waiting for healthCheckNodePort: expected at least 2 succeed=true on 10.138.0.3:30244/healthz, got 1 Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1416 +0x9a8 Nov 25 19:08:29.568: INFO: Waiting up to 15m0s for service "external-local-nodes" to have no LoadBalancer ------------------------------ Progress Report for Ginkgo Process #3 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 9m20.671s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 9m20.034s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-ft5h (Step Runtime: 3m5.085s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 811 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0035765d0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x18?, 0x2fd9d05?, 0x48?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x4?, 0xc004bdb668?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x7fff3271f4fd?, 0xa?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/providers/gce.(*Provider).EnsureLoadBalancerResourcesDeleted(0xc000edef48, {0xc00350c670, 0xb}, {0xc004af8318, 0x4}) test/e2e/framework/providers/gce/gce.go:195 k8s.io/kubernetes/test/e2e/framework.EnsureLoadBalancerResourcesDeleted(...) test/e2e/framework/util.go:551 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy.func1() test/e2e/framework/service/jig.go:602 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy(0xc004459d60, {0xc00350c670?, 0x0?}, 0x0?, 0x0?) test/e2e/framework/service/jig.go:614 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).ChangeServiceType(0x0?, {0x75c5095?, 0x0?}, 0x0?) test/e2e/framework/service/jig.go:186 > k8s.io/kubernetes/test/e2e/network.glob..func20.5.2() test/e2e/network/loadbalancer.go:1365 panic({0x70eb7e0, 0xc000b585b0}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00130a300, 0x7f}, {0xc004bdbc40?, 0x75b521a?, 0xc004bdbc60?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00072e000, 0x6a}, {0xc004bdbcd8?, 0xc00072e000?, 0xc004bdbd00?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3ee0, 0xc001508490}, {0x0?, 0xc004bdbf00?, 0x1?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1416 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc00457b200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 19:08:39.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 19:08:40.034: INFO: Output of kubectl describe svc: Nov 25 19:08:40.034: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=esipp-2809 describe svc --namespace=esipp-2809' Nov 25 19:08:40.774: INFO: stderr: "" Nov 25 19:08:40.774: INFO: stdout: "Name: external-local-nodes\nNamespace: esipp-2809\nLabels: testid=external-local-nodes-d2aac449-3f6e-42a0-bd6a-c43466bd8430\nAnnotations: <none>\nSelector: testid=external-local-nodes-d2aac449-3f6e-42a0-bd6a-c43466bd8430\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.114.114\nIPs: 10.0.114.114\nPort: <unset> 8081/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.1.158:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 5m28s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 4m52s service-controller Ensured load balancer\n Normal Type 11s service-controller LoadBalancer -> ClusterIP\n\n\nName: node-port-service\nNamespace: esipp-2809\nLabels: <none>\nAnnotations: <none>\nSelector: selector-3f8a842f-ca25-4293-9f70-cd253d8935eb=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.211.54\nIPs: 10.0.211.54\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31028/TCP\nEndpoints: 10.64.1.96:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30677/UDP\nEndpoints: 10.64.1.96:8081\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-2809\nLabels: <none>\nAnnotations: <none>\nSelector: selector-3f8a842f-ca25-4293-9f70-cd253d8935eb=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.190.99\nIPs: 10.0.190.99\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31388/TCP\nEndpoints: 10.64.1.96:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 32743/UDP\nEndpoints: 10.64.1.96:8081\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 19:08:40.774: INFO: Name: external-local-nodes Namespace: esipp-2809 Labels: testid=external-local-nodes-d2aac449-3f6e-42a0-bd6a-c43466bd8430 Annotations: <none> Selector: testid=external-local-nodes-d2aac449-3f6e-42a0-bd6a-c43466bd8430 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.114.114 IPs: 10.0.114.114 Port: <unset> 8081/TCP TargetPort: 80/TCP Endpoints: 10.64.1.158:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 5m28s service-controller Ensuring 
load balancer Normal EnsuredLoadBalancer 4m52s service-controller Ensured load balancer Normal Type 11s service-controller LoadBalancer -> ClusterIP Name: node-port-service Namespace: esipp-2809 Labels: <none> Annotations: <none> Selector: selector-3f8a842f-ca25-4293-9f70-cd253d8935eb=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.211.54 IPs: 10.0.211.54 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 31028/TCP Endpoints: 10.64.1.96:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 30677/UDP Endpoints: 10.64.1.96:8081 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-2809 Labels: <none> Annotations: <none> Selector: selector-3f8a842f-ca25-4293-9f70-cd253d8935eb=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.190.99 IPs: 10.0.190.99 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 31388/TCP Endpoints: 10.64.1.96:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 32743/UDP Endpoints: 10.64.1.96:8081 Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 19:08:40.774 STEP: Collecting events from namespace "esipp-2809". 11/25/22 19:08:40.775 STEP: Found 36 events. 11/25/22 19:08:40.819 Nov 25 19:08:40.819: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-2809/netserver-0 to bootstrap-e2e-minion-group-ft5h Nov 25 19:08:40.819: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-2809/netserver-1 to bootstrap-e2e-minion-group-p8wv Nov 25 19:08:40.819: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-2809/netserver-2 to bootstrap-e2e-minion-group-rvwg Nov 25 19:08:40.819: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-2809/test-container-pod to bootstrap-e2e-minion-group-ft5h Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:12 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:48 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Created: Created container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Started: Started container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:50 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:08:40.819: INFO: 
At 2022-11-25 19:03:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Killing: Stopping container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Created: Created container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:51 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Started: Started container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:52 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:52 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:52 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Killing: Stopping container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:53 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:53 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:54 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-2809(b491d1f7-0df0-4048-ae42-6715fa10e70f) Nov 25 19:08:40.819: INFO: At 2022-11-25 19:03:57 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-2809(b617e063-6457-4851-af50-e64e6c6eba15) Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:13 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:15 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container webserver Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:16 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
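The Killing / SandboxChanged / BackOff entries in this event dump (which continues below) show the webserver containers of the netserver pods being repeatedly stopped and restarted while the test was running. As a minimal sketch, not part of the e2e framework's own namespace dump and assuming the cluster and the "esipp-2809" namespace were still reachable at this point, the same event stream could be re-listed with client-go roughly like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run above; adjust for a different environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List the events the framework prints under "Collecting events from namespace".
	events, err := client.CoreV1().Events("esipp-2809").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%v %s/%s %s: %s\n", e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}

Roughly the same listing should be available with "kubectl get events -n esipp-2809 --sort-by=.lastTimestamp", as long as the namespace has not yet been deleted by the test cleanup.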
Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:23 +0000 UTC - event for external-local-nodes: {replication-controller } SuccessfulCreate: Created pod: external-local-nodes-98744 Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:24 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:24 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container netexec Nov 25 19:08:40.819: INFO: At 2022-11-25 19:05:24 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container netexec Nov 25 19:08:40.819: INFO: At 2022-11-25 19:06:30 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container netexec Nov 25 19:08:40.819: INFO: At 2022-11-25 19:06:32 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:08:40.819: INFO: At 2022-11-25 19:06:35 +0000 UTC - event for external-local-nodes-98744: {kubelet bootstrap-e2e-minion-group-ft5h} BackOff: Back-off restarting failed container netexec in pod external-local-nodes-98744_esipp-2809(34a1d827-495d-4b2d-9e28-98f2ae530240) Nov 25 19:08:40.819: INFO: At 2022-11-25 19:08:29 +0000 UTC - event for external-local-nodes: {service-controller } Type: LoadBalancer -> ClusterIP Nov 25 19:08:40.873: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:08:40.873: INFO: external-local-nodes-98744 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:23 +0000 UTC }] Nov 25 19:08:40.873: INFO: netserver-0 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC }] Nov 25 19:08:40.873: INFO: netserver-1 bootstrap-e2e-minion-group-p8wv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:08:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:08:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC }] Nov 25 19:08:40.874: INFO: netserver-2 bootstrap-e2e-minion-group-rvwg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:08:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:49 +0000 UTC }] Nov 25 19:08:40.874: INFO: 
test-container-pod bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:12 +0000 UTC }] Nov 25 19:08:40.874: INFO: Nov 25 19:08:41.360: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:08:41.435: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 7934 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:06:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:104.198.13.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:baec19a097c341ae8d14b5ee519a12bc,SystemUUID:baec19a0-97c3-41ae-8d14-b5ee519a12bc,BootID:cbb52bbc-4a45-4571-8271-7b01e70f9d0d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:08:41.436: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:08:41.539: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:08:41.653: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container etcd-container ready: true, restart count 3 Nov 25 19:08:41.653: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container kube-controller-manager ready: true, restart count 5 Nov 25 19:08:41.653: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container l7-lb-controller ready: false, restart count 6 Nov 25 19:08:41.653: INFO: metadata-proxy-v0.1-sd5zx started at 2022-11-25 18:55:32 +0000 UTC (0+2 container statuses recorded) Nov 25 19:08:41.653: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:08:41.653: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:08:41.653: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container etcd-container ready: true, restart count 3 Nov 25 19:08:41.653: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 19:08:41.653: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container kube-apiserver ready: true, restart count 1 Nov 25 19:08:41.653: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container kube-scheduler ready: true, restart count 4 Nov 25 19:08:41.653: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:41.653: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 25 19:08:41.978: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 19:08:41.978: INFO: Logging node info for node bootstrap-e2e-minion-group-ft5h Nov 25 19:08:42.069: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ft5h f6d0c520-a72a-4938-9464-c37052e3eead 8661 0 2022-11-25 18:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ft5h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-ft5h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5792":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-4022":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-8147":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-4396":"csi-mock-csi-mock-volumes-4396","csi-mock-csi-mock-volumes-4834":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9297":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9449":"csi-mock-csi-mock-volumes-9449"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status} {kube-controller-manager Update v1 2022-11-25 19:08:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-ft5h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:08:32 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:08:32 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 
19:08:32 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:08:32 +0000 UTC,LastTransitionTime:2022-11-25 18:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.110.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b840d0c08d09e2ff89c00115dd74e373,SystemUUID:b840d0c0-8d09-e2ff-89c0-0115dd74e373,BootID:2c546fd1-5b2e-4c92-9b03-025eb8882457,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2,DevicePath:,},},Config:nil,},} Nov 25 19:08:42.070: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ft5h Nov 25 19:08:42.169: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:08:42.346: INFO: forbid-27823384-rmpz4 started at 2022-11-25 19:04:00 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container c ready: true, restart count 1 Nov 25 19:08:42.346: INFO: metrics-server-v0.5.2-867b8754b9-565bg started at 2022-11-25 18:57:46 +0000 UTC (0+2 container statuses recorded) Nov 25 19:08:42.346: INFO: Container metrics-server ready: false, restart count 5 Nov 25 19:08:42.346: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 25 19:08:42.346: INFO: pvc-volume-tester-8ldh9 started at 2022-11-25 19:03:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:08:42.346: INFO: inclusterclient started at 2022-11-25 19:03:46 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container inclusterclient ready: false, restart count 0 Nov 25 19:08:42.346: INFO: kube-proxy-bootstrap-e2e-minion-group-ft5h started at 2022-11-25 18:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container kube-proxy ready: false, restart count 5 Nov 25 19:08:42.346: INFO: metadata-proxy-v0.1-9vhzj started at 2022-11-25 18:55:34 +0000 UTC (0+2 container 
statuses recorded) Nov 25 19:08:42.346: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:08:42.346: INFO: netserver-0 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 2 Nov 25 19:08:42.346: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-rz6vn started at 2022-11-25 18:59:31 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:08:42.346: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-ln9vd started at 2022-11-25 18:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 19:08:42.346: INFO: pod-subpath-test-preprovisionedpv-9fvn started at 2022-11-25 19:07:54 +0000 UTC (1+2 container statuses recorded) Nov 25 19:08:42.346: INFO: Init container init-volume-preprovisionedpv-9fvn ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container test-container-subpath-preprovisionedpv-9fvn ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container test-container-volume-preprovisionedpv-9fvn ready: true, restart count 1 Nov 25 19:08:42.346: INFO: konnectivity-agent-qf52c started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 25 19:08:42.346: INFO: pvc-volume-tester-c2f6h started at 2022-11-25 18:59:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:08:42.346: INFO: net-tiers-svc-pnq45 started at 2022-11-25 19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container netexec ready: false, restart count 3 Nov 25 19:08:42.346: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:39 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:42.346: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:08:42.346: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:41 +0000 UTC (0+4 container statuses recorded) Nov 25 19:08:42.346: INFO: Container busybox ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container mock ready: true, restart count 1 Nov 25 19:08:42.346: INFO: test-container-pod started at 2022-11-25 19:05:12 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 1 Nov 25 19:08:42.346: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:43 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:42.346: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:08:42.346: 
INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:08:42.346: INFO: netserver-0 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 2 Nov 25 19:08:42.346: INFO: csi-mockplugin-0 started at 2022-11-25 18:58:23 +0000 UTC (0+4 container statuses recorded) Nov 25 19:08:42.346: INFO: Container busybox ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:08:42.346: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:08:42.346: INFO: Container mock ready: true, restart count 1 Nov 25 19:08:42.346: INFO: back-off-cap started at 2022-11-25 19:04:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container back-off-cap ready: false, restart count 5 Nov 25 19:08:42.346: INFO: test-hostpath-type-mzwc9 started at 2022-11-25 19:05:16 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:08:42.346: INFO: emptydir-io-client started at 2022-11-25 18:59:40 +0000 UTC (1+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Init container emptydir-io-init ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container emptydir-io-client ready: false, restart count 0 Nov 25 19:08:42.346: INFO: hostpath-symlink-prep-provisioning-1956 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container init-volume-provisioning-1956 ready: false, restart count 0 Nov 25 19:08:42.346: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-t5gqq started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:08:42.346: INFO: netserver-0 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 0 Nov 25 19:08:42.346: INFO: test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 0 Nov 25 19:08:42.346: INFO: host-test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:08:42.346: INFO: nfs-server started at 2022-11-25 19:07:54 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container nfs-server ready: true, restart count 0 Nov 25 19:08:42.346: INFO: csi-mockplugin-0 started at 2022-11-25 19:02:58 +0000 UTC (0+3 container statuses recorded) Nov 25 19:08:42.346: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 19:08:42.346: INFO: Container driver-registrar ready: true, restart count 3 Nov 25 19:08:42.346: INFO: Container mock ready: true, restart count 3 Nov 25 19:08:42.346: INFO: external-local-nodes-98744 started at 2022-11-25 19:05:23 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container netexec ready: true, restart count 2 Nov 25 19:08:42.346: INFO: 
test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 0 Nov 25 19:08:42.346: INFO: netserver-0 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: true, restart count 1 Nov 25 19:08:42.346: INFO: netserver-0 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container webserver ready: false, restart count 2 Nov 25 19:08:42.346: INFO: test-hostpath-type-7djl8 started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:08:42.346: INFO: pod-subpath-test-dynamicpv-nplw started at 2022-11-25 19:05:39 +0000 UTC (1+2 container statuses recorded) Nov 25 19:08:42.346: INFO: Init container init-volume-dynamicpv-nplw ready: true, restart count 0 Nov 25 19:08:42.346: INFO: Container test-container-subpath-dynamicpv-nplw ready: false, restart count 3 Nov 25 19:08:42.346: INFO: Container test-container-volume-dynamicpv-nplw ready: false, restart count 0 Nov 25 19:08:42.346: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-6jwrb started at 2022-11-25 19:05:44 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.346: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 19:08:42.346: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:32 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:42.346: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:08:42.347: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:08:42.347: INFO: external-local-lb-rfz4b started at 2022-11-25 19:04:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.347: INFO: Container netexec ready: true, restart count 2 Nov 25 19:08:42.347: INFO: forbid-27823388-cpmvj started at 2022-11-25 19:08:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.347: INFO: Container c ready: true, restart count 0 Nov 25 19:08:42.347: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:29 +0000 UTC (0+3 container statuses recorded) Nov 25 19:08:42.347: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:08:42.347: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:08:42.347: INFO: Container mock ready: true, restart count 1 Nov 25 19:08:42.347: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 19:04:29 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:42.347: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 19:08:42.932: INFO: Latency metrics for node bootstrap-e2e-minion-group-ft5h Nov 25 19:08:42.932: INFO: Logging node info for node bootstrap-e2e-minion-group-p8wv Nov 25 19:08:43.003: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-p8wv 881c9872-bf9e-40c3-a0e6-f3f276af90f5 8841 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-p8wv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-p8wv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6105":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8134":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8995":"bootstrap-e2e-minion-group-p8wv","csi-mock-csi-mock-volumes-405":"csi-mock-csi-mock-volumes-405"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:04:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:08:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-p8wv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:08:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:08:41 +0000 
UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:08:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:08:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.198.109.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f8ac2f5b7ee9732248672e4b22a9ad9,SystemUUID:7f8ac2f5-b7ee-9732-2486-72e4b22a9ad9,BootID:ba8ee318-1295-42a5-a59a-f3bfc254bc58,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:08:43.004: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-p8wv Nov 25 19:08:43.073: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-p8wv Nov 25 19:08:43.262: INFO: metadata-proxy-v0.1-zw9qm started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:08:43.262: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:08:43.262: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-bds6n started at 2022-11-25 18:58:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:08:43.262: INFO: external-local-update-gqvff started at 2022-11-25 18:59:02 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container netexec ready: true, restart count 4 Nov 25 19:08:43.262: INFO: pod-8bbfb4d6-d93f-46c7-bf6b-1853fc9cc35b started at 2022-11-25 18:59:01 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:08:43.262: INFO: netserver-1 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container webserver ready: true, restart count 2 Nov 25 19:08:43.262: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:04:45 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.262: INFO: Container csi-attacher ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container csi-provisioner ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container 
csi-resizer ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container csi-snapshotter ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container hostpath ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container liveness-probe ready: false, restart count 1 Nov 25 19:08:43.262: INFO: Container node-driver-registrar ready: false, restart count 1 Nov 25 19:08:43.262: INFO: csi-mockplugin-0 started at 2022-11-25 19:07:58 +0000 UTC (0+4 container statuses recorded) Nov 25 19:08:43.262: INFO: Container busybox ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container mock ready: true, restart count 0 Nov 25 19:08:43.262: INFO: netserver-1 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container webserver ready: true, restart count 1 Nov 25 19:08:43.262: INFO: netserver-1 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container webserver ready: true, restart count 1 Nov 25 19:08:43.262: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-ksk87 started at 2022-11-25 18:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container agnhost-container ready: true, restart count 5 Nov 25 19:08:43.262: INFO: kube-proxy-bootstrap-e2e-minion-group-p8wv started at 2022-11-25 18:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container kube-proxy ready: true, restart count 6 Nov 25 19:08:43.262: INFO: volume-prep-provisioning-9094 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container init-volume-provisioning-9094 ready: false, restart count 0 Nov 25 19:08:43.262: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:53 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.262: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:08:43.262: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:08:43.262: INFO: coredns-6d97d5ddb-wrz6b started at 2022-11-25 18:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container coredns ready: false, restart count 6 Nov 25 19:08:43.262: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:02:27 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.262: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:08:43.262: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:08:43.262: INFO: httpd started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 
25 19:08:43.262: INFO: Container httpd ready: false, restart count 6 Nov 25 19:08:43.262: INFO: netserver-1 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container webserver ready: false, restart count 3 Nov 25 19:08:43.262: INFO: pod-configmaps-7e353c13-950c-4423-9808-a6f4226d3913 started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:08:43.262: INFO: netserver-1 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container webserver ready: false, restart count 6 Nov 25 19:08:43.262: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:59:20 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.262: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container hostpath ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 19:08:43.262: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 19:08:43.262: INFO: konnectivity-agent-26n2n started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 19:08:43.262: INFO: pod-configmaps-cff0010c-8d2d-4981-8991-10714a4dd75e started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:08:43.262: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-md6sx started at 2022-11-25 19:08:35 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.262: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:08:43.664: INFO: Latency metrics for node bootstrap-e2e-minion-group-p8wv Nov 25 19:08:43.664: INFO: Logging node info for node bootstrap-e2e-minion-group-rvwg Nov 25 19:08:43.756: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rvwg d72b04ed-8c3e-4237-a1a1-842914101de6 8858 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rvwg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rvwg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumeio-2766":"bootstrap-e2e-minion-group-rvwg","csi-hostpath-volumemode-1875":"bootstrap-e2e-minion-group-rvwg","csi-hostpath-volumemode-870":"bootstrap-e2e-minion-group-rvwg","csi-mock-csi-mock-volumes-1647":"csi-mock-csi-mock-volumes-1647"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:05:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:08:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:08:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-rvwg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.2.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dbf5cb99be0ec068c5cd2f1643938098,SystemUUID:dbf5cb99-be0e-c068-c5cd-2f1643938098,BootID:e2f727ad-5c6c-4e26-854f-4f7e80c2c71f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:08:43.756: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rvwg Nov 25 19:08:43.820: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rvwg Nov 25 19:08:43.950: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:40 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.950: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container hostpath ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 19:08:43.950: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 19:08:43.950: INFO: netserver-2 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container webserver ready: true, restart count 3 Nov 25 19:08:43.950: INFO: l7-default-backend-8549d69d99-vsr6h started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 19:08:43.950: INFO: netserver-2 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container webserver ready: false, restart count 5 Nov 25 19:08:43.950: INFO: 
pod-b613c2de-9165-4dc0-b03e-fba4980e2ba0 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:08:43.950: INFO: netserver-2 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container webserver ready: false, restart count 3 Nov 25 19:08:43.950: INFO: netserver-2 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container webserver ready: true, restart count 2 Nov 25 19:08:43.950: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-x62bn started at 2022-11-25 18:58:59 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 19:08:43.950: INFO: netserver-2 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container webserver ready: true, restart count 1 Nov 25 19:08:43.950: INFO: kube-dns-autoscaler-5f6455f985-5hbzc started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container autoscaler ready: false, restart count 5 Nov 25 19:08:43.950: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:58:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.950: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:08:43.950: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:08:43.950: INFO: metadata-proxy-v0.1-szbqx started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:08:43.950: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:08:43.950: INFO: coredns-6d97d5ddb-l2w5l started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container coredns ready: false, restart count 6 Nov 25 19:08:43.950: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:07:57 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.950: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:08:43.950: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-fkmcp started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container agnhost-container ready: true, restart count 4 Nov 25 19:08:43.950: INFO: pod-72691924-c01b-4050-9991-a15a20879782 started at 2022-11-25 18:59:17 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container write-pod ready: false, restart count 0 Nov 25 
19:08:43.950: INFO: csi-mockplugin-0 started at 2022-11-25 19:08:38 +0000 UTC (0+4 container statuses recorded) Nov 25 19:08:43.950: INFO: Container busybox ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 19:08:43.950: INFO: Container mock ready: true, restart count 0 Nov 25 19:08:43.950: INFO: kube-proxy-bootstrap-e2e-minion-group-rvwg started at 2022-11-25 18:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container kube-proxy ready: true, restart count 6 Nov 25 19:08:43.950: INFO: volume-snapshot-controller-0 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 19:08:43.950: INFO: konnectivity-agent-9br57 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 25 19:08:43.950: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:08:43.950: INFO: Container csi-attacher ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container csi-provisioner ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container csi-resizer ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container hostpath ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container liveness-probe ready: false, restart count 5 Nov 25 19:08:43.950: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 25 19:08:43.950: INFO: pod-a42d737b-fb10-4b6c-b071-ba961690b9ce started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:08:43.950: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:08:44.287: INFO: Latency metrics for node bootstrap-e2e-minion-group-rvwg [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-2809" for this suite. 11/25/22 19:08:44.287
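The node dumps above are the framework's post-failure debug output: per-node conditions (Ready, MemoryPressure, DiskPressure, PIDPressure), addresses, kubelet and runtime versions, cached images, and the container readiness and restart counts of every pod the kubelet reports on that node. Roughly the same picture can be pulled ad hoc with client-go; the sketch below is a hypothetical, minimal version (the kubeconfig path is the one this run logs, and only conditions and container statuses are printed, not the full Node object):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; this run logs /workspace/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s\n", n.Name)
		// Node conditions, e.g. Ready/MemoryPressure/DiskPressure as in the dump above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
		// Pods scheduled onto this node and their container readiness/restart counts.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + n.Name,
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				fmt.Printf("  %s/%s container %s ready=%t restarts=%d\n",
					p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
			}
		}
	}
}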
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
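The --ginkgo.focus argument is a shell-escaped regular expression that Ginkgo matches against the full spec text. A standalone way to sanity-check that the expression selects the intended spec, using nothing but the literal string the regex above expands to (this only exercises the regex, not the test itself):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus expression from the command above, with the shell escaping removed.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$`)

	// The literal spec text the regex is written against.
	spec := "Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer"
	fmt.Println(focus.MatchString(spec)) // true
}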
test/e2e/framework/network/utils.go:834 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a64700, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001006000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 +0x10a k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 +0x37f
from junit_01.xml
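The failing frame is NetworkingTestConfig.setup, which (per the goroutine dump further down) is stuck in WaitForServiceEndpointsNum waiting for the node-port-service Endpoints to reach 3. A minimal sketch of that style of wait with client-go follows; it assumes counting ready addresses on the Endpoints object is the relevant check, and the package and function names are illustrative, not the framework's own helper:

// Package e2eutil is a hypothetical home for this sketch.
package e2eutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls until the Service's Endpoints object carries the
// expected number of ready addresses, or the timeout expires. It mirrors the
// shape of the wait that times out in the stack trace above; it is not the
// framework's WaitForServiceEndpointsNum.
func waitForEndpointCount(cs kubernetes.Interface, ns, svc string, want int, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // Endpoints object not created yet; keep polling.
		}
		if err != nil {
			return false, err
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses)
		}
		return got == want, nil
	})
}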
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 19:02:24.88 Nov 25 19:02:24.880: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 19:02:24.882 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:02:25.092 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:02:25.176 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=LoadBalancer test/e2e/network/loadbalancer.go:1266 STEP: creating a service esipp-9295/external-local-lb with type=LoadBalancer 11/25/22 19:02:25.542 STEP: setting ExternalTrafficPolicy=Local 11/25/22 19:02:25.542 STEP: waiting for loadbalancer for service esipp-9295/external-local-lb 11/25/22 19:02:25.65 Nov 25 19:02:25.650: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-lb 11/25/22 19:04:25.785 Nov 25 19:04:25.846: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 19:04:25.910: INFO: Found 0/1 pods - will retry Nov 25 19:04:27.965: INFO: Found all 1 pods Nov 25 19:04:27.965: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-lb-rfz4b] Nov 25 19:04:27.965: INFO: Waiting up to 2m0s for pod "external-local-lb-rfz4b" in namespace "esipp-9295" to be "running and ready" Nov 25 19:04:28.019: INFO: Pod "external-local-lb-rfz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.482456ms Nov 25 19:04:28.019: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:04:30.087: INFO: Pod "external-local-lb-rfz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122656512s Nov 25 19:04:30.087: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:04:32.090: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 4.125443079s Nov 25 19:04:32.090: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:34.069: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.104384197s Nov 25 19:04:34.069: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:36.087: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 8.122156971s Nov 25 19:04:36.087: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:38.082: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 10.117336579s Nov 25 19:04:38.082: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:40.140: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 12.175446252s Nov 25 19:04:40.140: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:42.076: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.1116838s Nov 25 19:04:42.076: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:44.085: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 16.12037836s Nov 25 19:04:44.085: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:46.121: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 18.156683576s Nov 25 19:04:46.121: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:48.069: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=false. Elapsed: 20.10450746s Nov 25 19:04:48.069: INFO: Error evaluating pod condition running and ready: pod 'external-local-lb-rfz4b' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:31 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }] Nov 25 19:04:50.081: INFO: Pod "external-local-lb-rfz4b": Phase="Running", Reason="", readiness=true. Elapsed: 22.116797482s Nov 25 19:04:50.081: INFO: Pod "external-local-lb-rfz4b" satisfied condition "running and ready" Nov 25 19:04:50.082: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-lb-rfz4b] STEP: waiting for loadbalancer for service esipp-9295/external-local-lb 11/25/22 19:04:50.082 Nov 25 19:04:50.082: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: reading clientIP using the TCP service's service port via its external VIP 11/25/22 19:04:50.148 Nov 25 19:04:50.148: INFO: Poking "http://35.233.169.52:80/clientip" Nov 25 19:05:00.148: INFO: Poke("http://35.233.169.52:80/clientip"): Get "http://35.233.169.52:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:02.151: INFO: Poking "http://35.233.169.52:80/clientip" Nov 25 19:05:12.154: INFO: Poke("http://35.233.169.52:80/clientip"): Get "http://35.233.169.52:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:14.149: INFO: Poking "http://35.233.169.52:80/clientip" Nov 25 19:05:24.150: INFO: Poke("http://35.233.169.52:80/clientip"): Get "http://35.233.169.52:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:26.149: INFO: Poking "http://35.233.169.52:80/clientip" Nov 25 19:05:36.149: INFO: Poke("http://35.233.169.52:80/clientip"): Get "http://35.233.169.52:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:38.149: INFO: Poking "http://35.233.169.52:80/clientip" Nov 25 19:05:38.230: INFO: Poke("http://35.233.169.52:80/clientip"): success Nov 25 19:05:38.230: INFO: ClientIP detected by target pod using VIP:SvcPort is 34.135.0.117:54004 STEP: checking if Source IP is preserved 11/25/22 19:05:38.23 Nov 25 19:05:38.410: INFO: Waiting up to 15m0s for service "external-local-lb" to have no LoadBalancer STEP: Performing setup for networking test in namespace esipp-9295 11/25/22 19:05:49.805 STEP: creating a selector 11/25/22 19:05:49.805 STEP: Creating the service pods in kubernetes 11/25/22 19:05:49.805 Nov 25 19:05:49.805: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 19:05:50.306: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-9295" to be "running and ready" Nov 25 19:05:50.392: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 86.189281ms Nov 25 19:05:50.392: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:05:52.462: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156240335s Nov 25 19:05:52.462: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:06:31.740: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 41.433614393s Nov 25 19:06:31.740: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:06:32.434: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.128183813s Nov 25 19:06:32.434: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:06:34.434: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 44.128194528s Nov 25 19:06:34.434: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:06:36.435: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.129419482s Nov 25 19:06:36.435: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:06:38.437: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.130711087s Nov 25 19:06:38.437: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:06:40.435: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.129493278s Nov 25 19:06:40.436: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:06:42.435: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.129068877s Nov 25 19:06:42.435: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:06:44.463: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 54.156865336s Nov 25 19:06:44.463: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 19:06:44.463: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 19:06:44.505: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-9295" to be "running and ready" Nov 25 19:06:44.560: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 55.215221ms Nov 25 19:06:44.560: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 19:06:44.560: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 19:06:44.605: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-9295" to be "running and ready" Nov 25 19:06:44.649: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 43.467552ms Nov 25 19:06:44.649: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 19:06:44.649: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 19:06:44.693 Nov 25 19:06:44.744: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-9295" to be "running" Nov 25 19:06:44.786: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 41.708095ms Nov 25 19:06:46.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083942247s Nov 25 19:06:48.827: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082791763s Nov 25 19:06:50.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083820532s Nov 25 19:06:52.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083975643s Nov 25 19:06:54.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083395018s Nov 25 19:06:56.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.084156898s Nov 25 19:06:58.829: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 14.084867139s Nov 25 19:07:00.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 16.083998183s Nov 25 19:07:02.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 18.084343467s Nov 25 19:07:04.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 20.083827631s Nov 25 19:07:06.829: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 22.084497217s Nov 25 19:07:08.828: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.084145626s Nov 25 19:07:10.828: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 26.084170669s Nov 25 19:07:10.828: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 19:07:10.870: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 19:07:10.87 Nov 25 19:07:10.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 19:07:10.964 Nov 25 19:07:11.055: INFO: Service node-port-service in namespace esipp-9295 found. Nov 25 19:07:11.186: INFO: Service session-affinity-service in namespace esipp-9295 found. STEP: Waiting for NodePort service to expose endpoint 11/25/22 19:07:11.228 Nov 25 19:07:12.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:13.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:14.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:15.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:16.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:17.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:18.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:19.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:20.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:21.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:22.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:23.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:24.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:25.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m0.664s) test/e2e/network/loadbalancer.go:1266 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1266 At [By Step] Waiting for NodePort service to expose endpoint (Step Runtime: 14.316s) test/e2e/framework/network/utils.go:832 Spec Goroutine goroutine 1149 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003e31a40, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x8?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0x754e980?, 0xc001169b58?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework.WaitForServiceEndpointsNum({0x801de88?, 0xc00343e820}, {0xc003e23a10, 0xa}, {0x75ee1b4, 0x11}, 0x3, 0x0?, 0x7fd47149aa68?) test/e2e/framework/util.go:424 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a64700, 0x3c?) 
test/e2e/framework/network/utils.go:833 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001006000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 > k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000c1c000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:26.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:27.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:28.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:29.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:30.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:31.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:32.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:33.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:34.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:35.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:36.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:37.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:38.229: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:39.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:40.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:41.228: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:41.269: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:07:41.310: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-9295: <*errors.errorString | 0xc0001c99e0>: { s: "timed out waiting for the condition", } Nov 25 19:07:41.310: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-9295: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000a64700, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001006000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 +0x10a k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 +0x37f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 19:07:41.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 19:07:41.362: INFO: Output of kubectl describe svc: Nov 25 19:07:41.362: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=esipp-9295 describe svc --namespace=esipp-9295' Nov 25 19:07:41.975: INFO: stderr: "" Nov 25 19:07:41.975: INFO: stdout: "Name: external-local-lb\nNamespace: esipp-9295\nLabels: testid=external-local-lb-82e21db1-d870-4a5b-b06c-aa79e13c59ad\nAnnotations: <none>\nSelector: testid=external-local-lb-82e21db1-d870-4a5b-b06c-aa79e13c59ad\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.199.50\nIPs: 10.0.199.50\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.1.117:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 3m53s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 3m16s service-controller Ensured load balancer\n Normal Type 2m3s service-controller LoadBalancer -> ClusterIP\n\n\nName: node-port-service\nNamespace: esipp-9295\nLabels: <none>\nAnnotations: <none>\nSelector: selector-2751106c-00fb-4b71-8e52-36217cabd426=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.24.223\nIPs: 10.0.24.223\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31103/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30383/UDP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-9295\nLabels: <none>\nAnnotations: <none>\nSelector: selector-2751106c-00fb-4b71-8e52-36217cabd426=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.107.232\nIPs: 10.0.107.232\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31901/TCP\nEndpoints: <none>\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30319/UDP\nEndpoints: <none>\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 19:07:41.975: INFO: Name: external-local-lb Namespace: esipp-9295 Labels: testid=external-local-lb-82e21db1-d870-4a5b-b06c-aa79e13c59ad Annotations: <none> Selector: testid=external-local-lb-82e21db1-d870-4a5b-b06c-aa79e13c59ad Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.199.50 IPs: 10.0.199.50 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.64.1.117:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 3m53s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 3m16s service-controller Ensured load balancer Normal Type 2m3s service-controller LoadBalancer -> ClusterIP Name: node-port-service Namespace: esipp-9295 Labels: <none> Annotations: <none> Selector: 
selector-2751106c-00fb-4b71-8e52-36217cabd426=true
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.24.223
IPs: 10.0.24.223
Port: http 80/TCP
TargetPort: 8083/TCP
NodePort: http 31103/TCP
Endpoints: <none>
Port: udp 90/UDP
TargetPort: 8081/UDP
NodePort: udp 30383/UDP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Name: session-affinity-service
Namespace: esipp-9295
Labels: <none>
Annotations: <none>
Selector: selector-2751106c-00fb-4b71-8e52-36217cabd426=true
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.107.232
IPs: 10.0.107.232
Port: http 80/TCP
TargetPort: 8083/TCP
NodePort: http 31901/TCP
Endpoints: <none>
Port: udp 90/UDP
TargetPort: 8081/UDP
NodePort: udp 30319/UDP
Endpoints: <none>
Session Affinity: ClientIP
External Traffic Policy: Cluster
Events: <none>
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 19:07:41.975
STEP: Collecting events from namespace "esipp-9295". 11/25/22 19:07:41.975
STEP: Found 38 events. 11/25/22 19:07:42.021
Nov 25 19:07:42.021: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-lb-rfz4b: { } Scheduled: Successfully assigned esipp-9295/external-local-lb-rfz4b to bootstrap-e2e-minion-group-ft5h
Nov 25 19:07:42.021: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-9295/netserver-0 to bootstrap-e2e-minion-group-ft5h
Nov 25 19:07:42.021: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-9295/netserver-1 to bootstrap-e2e-minion-group-p8wv
Nov 25 19:07:42.021: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-9295/netserver-2 to bootstrap-e2e-minion-group-rvwg
Nov 25 19:07:42.021: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-9295/test-container-pod to bootstrap-e2e-minion-group-ft5h
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:03:48 +0000 UTC - event for external-local-lb: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:25 +0000 UTC - event for external-local-lb: {replication-controller } SuccessfulCreate: Created pod: external-local-lb-rfz4b
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:25 +0000 UTC - event for external-local-lb: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:27 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:27 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container netexec
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:28 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} Unhealthy: Readiness probe failed: Get "http://10.64.1.110:80/hostName": EOF
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:28 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container netexec
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:28 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container netexec
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:29 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:04:32 +0000 UTC - event for external-local-lb-rfz4b: {kubelet bootstrap-e2e-minion-group-ft5h} BackOff: Back-off restarting failed container netexec in pod external-local-lb-rfz4b_esipp-9295(7d273578-1ffe-440f-b4b9-4b133c1c123e)
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:38 +0000 UTC - event for external-local-lb: {service-controller } Type: LoadBalancer -> ClusterIP
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Started: Started container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Created: Created container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:51 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-mvnr4" : failed to sync configmap cache: timed out waiting for the condition
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:52 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:52 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Started: Started container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:52 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Created: Created container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:53 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Killing: Stopping container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:54 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:05:57 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-9295(fb0829d0-b93d-4c4f-b3ff-61e468370469)
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:06:01 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:06:01 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:06:05 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} BackOff: Back-off restarting failed container webserver in pod netserver-0_esipp-9295(9395304d-77c1-4de7-88ed-6ca1c11b8dd1)
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:10 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:10 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:10 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:22 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Killing: Stopping container webserver
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:23 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 19:07:42.021: INFO: At 2022-11-25 19:07:26 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-9295(033181fa-c502-4763-bf89-2f3f39a99769)
Nov 25 19:07:42.066: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 25 19:07:42.066: INFO: external-local-lb-rfz4b bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:04:25 +0000 UTC }]
Nov 25 19:07:42.066: INFO: netserver-0 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC }]
Nov 25 19:07:42.066: INFO: netserver-1 bootstrap-e2e-minion-group-p8wv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:07:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC }]
Nov 25 19:07:42.066: INFO: netserver-2 bootstrap-e2e-minion-group-rvwg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:06:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:05:50 +0000 UTC }]
Nov 25 19:07:42.066: INFO: test-container-pod bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25
19:07:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:07:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:07:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:07:08 +0000 UTC }] Nov 25 19:07:42.066: INFO: Nov 25 19:07:42.339: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:07:42.380: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 7934 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:06:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} 
{<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:06:37 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:104.198.13.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:baec19a097c341ae8d14b5ee519a12bc,SystemUUID:baec19a0-97c3-41ae-8d14-b5ee519a12bc,BootID:cbb52bbc-4a45-4571-8271-7b01e70f9d0d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:07:42.381: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:07:42.425: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:07:42.480: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container etcd-container ready: true, restart count 2 Nov 25 19:07:42.480: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container kube-controller-manager ready: true, restart count 5 Nov 25 19:07:42.480: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container l7-lb-controller ready: false, restart count 6 Nov 25 19:07:42.480: INFO: metadata-proxy-v0.1-sd5zx started at 2022-11-25 18:55:32 +0000 UTC (0+2 container statuses recorded) Nov 25 19:07:42.480: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:07:42.480: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:07:42.480: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container etcd-container ready: true, restart count 3 Nov 25 19:07:42.480: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 19:07:42.480: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container kube-apiserver ready: true, restart count 1 Nov 25 19:07:42.480: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container kube-scheduler ready: true, restart count 4 Nov 25 19:07:42.480: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.480: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 25 19:07:42.660: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 19:07:42.660: INFO: Logging node info for node bootstrap-e2e-minion-group-ft5h Nov 25 19:07:42.704: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ft5h f6d0c520-a72a-4938-9464-c37052e3eead 8079 0 2022-11-25 18:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ft5h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 
topology.hostpath.csi/node:bootstrap-e2e-minion-group-ft5h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5792":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-4022":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-8147":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-4396":"csi-mock-csi-mock-volumes-4396","csi-mock-csi-mock-volumes-4834":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9297":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9449":"csi-mock-csi-mock-volumes-9449"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:05:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 19:07:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-ft5h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:07:11 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 
19:07:11 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:07:11 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:07:11 +0000 UTC,LastTransitionTime:2022-11-25 18:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.110.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b840d0c08d09e2ff89c00115dd74e373,SystemUUID:b840d0c0-8d09-e2ff-89c0-0115dd74e373,BootID:2c546fd1-5b2e-4c92-9b03-025eb8882457,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4022^2a742419-6cf4-11ed-bda8-1e2389fc6c86,DevicePath:,},},Config:nil,},} Nov 25 19:07:42.704: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ft5h Nov 25 19:07:42.749: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:07:42.844: INFO: emptydir-io-client started at 2022-11-25 18:59:40 +0000 UTC (1+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Init container emptydir-io-init ready: true, restart count 0 Nov 25 19:07:42.844: INFO: Container emptydir-io-client ready: false, restart count 0 Nov 25 19:07:42.844: INFO: hostpath-symlink-prep-provisioning-1956 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container init-volume-provisioning-1956 ready: false, restart count 0 Nov 25 19:07:42.844: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-t5gqq started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: 
Container agnhost-container ready: true, restart count 2 Nov 25 19:07:42.844: INFO: netserver-0 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container webserver ready: true, restart count 0 Nov 25 19:07:42.844: INFO: test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container webserver ready: true, restart count 0 Nov 25 19:07:42.844: INFO: host-test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:07:42.844: INFO: external-provisioner-95b75 started at 2022-11-25 19:05:04 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container nfs-provisioner ready: true, restart count 2 Nov 25 19:07:42.844: INFO: csi-mockplugin-0 started at 2022-11-25 19:02:58 +0000 UTC (0+3 container statuses recorded) Nov 25 19:07:42.844: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 19:07:42.844: INFO: Container driver-registrar ready: true, restart count 3 Nov 25 19:07:42.844: INFO: Container mock ready: true, restart count 3 Nov 25 19:07:42.844: INFO: external-local-nodes-98744 started at 2022-11-25 19:05:23 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container netexec ready: true, restart count 2 Nov 25 19:07:42.844: INFO: test-container-pod started at 2022-11-25 19:07:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container webserver ready: true, restart count 0 Nov 25 19:07:42.844: INFO: netserver-0 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container webserver ready: true, restart count 1 Nov 25 19:07:42.844: INFO: test-hostpath-type-7djl8 started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:07:42.844: INFO: affinity-lb-esipp-transition-5rfqz started at 2022-11-25 19:05:26 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container affinity-lb-esipp-transition ready: true, restart count 0 Nov 25 19:07:42.844: INFO: pod-subpath-test-dynamicpv-nplw started at 2022-11-25 19:05:39 +0000 UTC (1+2 container statuses recorded) Nov 25 19:07:42.844: INFO: Init container init-volume-dynamicpv-nplw ready: true, restart count 0 Nov 25 19:07:42.844: INFO: Container test-container-subpath-dynamicpv-nplw ready: true, restart count 3 Nov 25 19:07:42.844: INFO: Container test-container-volume-dynamicpv-nplw ready: true, restart count 0 Nov 25 19:07:42.844: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-6jwrb started at 2022-11-25 19:05:44 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 19:07:42.844: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:32 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:42.844: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container hostpath ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 19:07:42.844: INFO: 
Container node-driver-registrar ready: true, restart count 1 Nov 25 19:07:42.844: INFO: external-local-lb-rfz4b started at 2022-11-25 19:04:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container netexec ready: true, restart count 2 Nov 25 19:07:42.844: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:29 +0000 UTC (0+3 container statuses recorded) Nov 25 19:07:42.844: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:07:42.844: INFO: Container mock ready: true, restart count 1 Nov 25 19:07:42.844: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 19:04:29 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container csi-attacher ready: false, restart count 2 Nov 25 19:07:42.844: INFO: forbid-27823384-rmpz4 started at 2022-11-25 19:04:00 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container c ready: true, restart count 0 Nov 25 19:07:42.844: INFO: metrics-server-v0.5.2-867b8754b9-565bg started at 2022-11-25 18:57:46 +0000 UTC (0+2 container statuses recorded) Nov 25 19:07:42.844: INFO: Container metrics-server ready: false, restart count 5 Nov 25 19:07:42.844: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 25 19:07:42.844: INFO: pvc-volume-tester-8ldh9 started at 2022-11-25 19:03:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:07:42.844: INFO: inclusterclient started at 2022-11-25 19:03:46 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container inclusterclient ready: false, restart count 0 Nov 25 19:07:42.844: INFO: kube-proxy-bootstrap-e2e-minion-group-ft5h started at 2022-11-25 18:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container kube-proxy ready: true, restart count 5 Nov 25 19:07:42.844: INFO: metadata-proxy-v0.1-9vhzj started at 2022-11-25 18:55:34 +0000 UTC (0+2 container statuses recorded) Nov 25 19:07:42.844: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:07:42.844: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:07:42.844: INFO: netserver-0 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container webserver ready: true, restart count 2 Nov 25 19:07:42.844: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-rz6vn started at 2022-11-25 18:59:31 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:07:42.844: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-ln9vd started at 2022-11-25 18:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.844: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 19:07:42.844: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:43 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:42.844: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:07:42.844: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container 
node-driver-registrar ready: true, restart count 2 Nov 25 19:07:42.845: INFO: konnectivity-agent-qf52c started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 25 19:07:42.845: INFO: pvc-volume-tester-c2f6h started at 2022-11-25 18:59:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:07:42.845: INFO: net-tiers-svc-pnq45 started at 2022-11-25 19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container netexec ready: true, restart count 3 Nov 25 19:07:42.845: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:39 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:42.845: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:07:42.845: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:07:42.845: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:41 +0000 UTC (0+4 container statuses recorded) Nov 25 19:07:42.845: INFO: Container busybox ready: true, restart count 1 Nov 25 19:07:42.845: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:07:42.845: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:07:42.845: INFO: Container mock ready: true, restart count 1 Nov 25 19:07:42.845: INFO: test-container-pod started at 2022-11-25 19:05:12 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container webserver ready: true, restart count 1 Nov 25 19:07:42.845: INFO: netserver-0 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container webserver ready: true, restart count 2 Nov 25 19:07:42.845: INFO: csi-mockplugin-0 started at 2022-11-25 18:58:23 +0000 UTC (0+4 container statuses recorded) Nov 25 19:07:42.845: INFO: Container busybox ready: true, restart count 1 Nov 25 19:07:42.845: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:07:42.845: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:07:42.845: INFO: Container mock ready: true, restart count 1 Nov 25 19:07:42.845: INFO: back-off-cap started at 2022-11-25 19:04:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container back-off-cap ready: false, restart count 4 Nov 25 19:07:42.845: INFO: test-hostpath-type-mzwc9 started at 2022-11-25 19:05:16 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:42.845: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:07:43.184: INFO: Latency metrics for node bootstrap-e2e-minion-group-ft5h Nov 25 19:07:43.184: INFO: Logging node info for node bootstrap-e2e-minion-group-p8wv Nov 25 19:07:43.225: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-p8wv 881c9872-bf9e-40c3-a0e6-f3f276af90f5 7977 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 
failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-p8wv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-p8wv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2992":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-6105":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8134":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8995":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-provisioning-3853":"bootstrap-e2e-minion-group-p8wv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:04:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:05:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:06:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-p8wv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:17 +0000 
UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:05:17 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.198.109.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f8ac2f5b7ee9732248672e4b22a9ad9,SystemUUID:7f8ac2f5-b7ee-9732-2486-72e4b22a9ad9,BootID:ba8ee318-1295-42a5-a59a-f3bfc254bc58,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:07:43.226: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-p8wv Nov 25 19:07:43.270: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-p8wv Nov 25 19:07:43.331: INFO: netserver-1 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container webserver ready: true, restart count 1 Nov 25 19:07:43.331: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-ksk87 started at 2022-11-25 18:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container agnhost-container ready: false, restart count 4 Nov 25 19:07:43.331: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:02:28 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.331: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container hostpath ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 19:07:43.331: INFO: kube-proxy-bootstrap-e2e-minion-group-p8wv started at 2022-11-25 18:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container kube-proxy ready: false, restart count 5 Nov 25 19:07:43.331: INFO: volume-prep-provisioning-9094 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container init-volume-provisioning-9094 ready: false, restart count 0 Nov 25 19:07:43.331: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:53 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.331: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container csi-provisioner ready: true, 
restart count 0 Nov 25 19:07:43.331: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:07:43.331: INFO: affinity-lb-esipp-transition-2f2q9 started at 2022-11-25 19:05:26 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container affinity-lb-esipp-transition ready: true, restart count 2 Nov 25 19:07:43.331: INFO: coredns-6d97d5ddb-wrz6b started at 2022-11-25 18:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container coredns ready: false, restart count 6 Nov 25 19:07:43.331: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:02:27 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.331: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:07:43.331: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:07:43.331: INFO: httpd started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container httpd ready: false, restart count 6 Nov 25 19:07:43.331: INFO: netserver-1 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container webserver ready: true, restart count 3 Nov 25 19:07:43.331: INFO: pod-configmaps-7e353c13-950c-4423-9808-a6f4226d3913 started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:07:43.331: INFO: netserver-1 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container webserver ready: false, restart count 6 Nov 25 19:07:43.331: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:59:20 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.331: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container hostpath ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 19:07:43.331: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 19:07:43.331: INFO: konnectivity-agent-26n2n started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 19:07:43.331: INFO: pod-configmaps-cff0010c-8d2d-4981-8991-10714a4dd75e started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:07:43.331: INFO: metadata-proxy-v0.1-zw9qm started at 
2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:07:43.331: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:07:43.331: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:07:43.331: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-bds6n started at 2022-11-25 18:58:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:07:43.331: INFO: external-local-update-gqvff started at 2022-11-25 18:59:02 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container netexec ready: true, restart count 4 Nov 25 19:07:43.331: INFO: pod-8bbfb4d6-d93f-46c7-bf6b-1853fc9cc35b started at 2022-11-25 18:59:01 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:07:43.331: INFO: netserver-1 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.331: INFO: Container webserver ready: false, restart count 1 Nov 25 19:07:43.331: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:04:45 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.331: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container hostpath ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 19:07:43.331: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 19:07:43.639: INFO: Latency metrics for node bootstrap-e2e-minion-group-p8wv Nov 25 19:07:43.639: INFO: Logging node info for node bootstrap-e2e-minion-group-rvwg Nov 25 19:07:43.680: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rvwg d72b04ed-8c3e-4237-a1a1-842914101de6 7933 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rvwg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rvwg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumeio-2766":"bootstrap-e2e-minion-group-rvwg","csi-hostpath-volumemode-870":"bootstrap-e2e-minion-group-rvwg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:04:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:05:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-rvwg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:05:42 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:05:33 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.2.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dbf5cb99be0ec068c5cd2f1643938098,SystemUUID:dbf5cb99-be0e-c068-c5cd-2f1643938098,BootID:e2f727ad-5c6c-4e26-854f-4f7e80c2c71f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:07:43.681: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rvwg Nov 25 19:07:43.725: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rvwg Nov 25 19:07:43.780: INFO: kube-dns-autoscaler-5f6455f985-5hbzc started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container autoscaler ready: false, restart count 5 Nov 25 19:07:43.780: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:58:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.780: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:07:43.780: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:07:43.780: INFO: metadata-proxy-v0.1-szbqx started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:07:43.780: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:07:43.780: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:07:43.780: INFO: coredns-6d97d5ddb-l2w5l started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: 
Container coredns ready: false, restart count 5 Nov 25 19:07:43.780: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-fkmcp started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container agnhost-container ready: false, restart count 3 Nov 25 19:07:43.780: INFO: pod-72691924-c01b-4050-9991-a15a20879782 started at 2022-11-25 18:59:17 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:07:43.780: INFO: kube-proxy-bootstrap-e2e-minion-group-rvwg started at 2022-11-25 18:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container kube-proxy ready: true, restart count 6 Nov 25 19:07:43.780: INFO: external-provisioner-r2xbg started at 2022-11-25 19:05:41 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 25 19:07:43.780: INFO: volume-snapshot-controller-0 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 19:07:43.780: INFO: konnectivity-agent-9br57 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 25 19:07:43.780: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.780: INFO: Container csi-attacher ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-provisioner ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-resizer ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container hostpath ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container liveness-probe ready: false, restart count 4 Nov 25 19:07:43.780: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 25 19:07:43.780: INFO: pod-a42d737b-fb10-4b6c-b071-ba961690b9ce started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:07:43.780: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:40 +0000 UTC (0+7 container statuses recorded) Nov 25 19:07:43.780: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container hostpath ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 19:07:43.780: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 19:07:43.780: INFO: l7-default-backend-8549d69d99-vsr6h started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 19:07:43.780: INFO: netserver-2 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container webserver ready: false, restart count 5 Nov 25 19:07:43.780: INFO: pod-b613c2de-9165-4dc0-b03e-fba4980e2ba0 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: 
INFO: Container write-pod ready: false, restart count 0 Nov 25 19:07:43.780: INFO: netserver-2 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container webserver ready: false, restart count 2 Nov 25 19:07:43.780: INFO: netserver-2 started at 2022-11-25 19:05:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container webserver ready: true, restart count 2 Nov 25 19:07:43.780: INFO: netserver-2 started at 2022-11-25 19:05:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container webserver ready: true, restart count 2 Nov 25 19:07:43.780: INFO: mutability-test-js7zw started at 2022-11-25 18:58:39 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container netexec ready: false, restart count 5 Nov 25 19:07:43.780: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-x62bn started at 2022-11-25 18:58:59 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 19:07:43.780: INFO: affinity-lb-esipp-transition-7q5ht started at 2022-11-25 19:05:26 +0000 UTC (0+1 container statuses recorded) Nov 25 19:07:43.780: INFO: Container affinity-lb-esipp-transition ready: true, restart count 1 Nov 25 19:07:44.003: INFO: Latency metrics for node bootstrap-e2e-minion-group-rvwg [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-9295" for this suite. 11/25/22 19:07:44.003
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/network/utils.go:834 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc001082700, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001330000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:59:52.646 Nov 25 18:59:52.647: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 18:59:52.648 Nov 25 18:59:52.688: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:54.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:56.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:58.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:00.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:02.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:04.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:06.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:08.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:10.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:12.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:14.729: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:16.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:18.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:20.728: INFO: Unexpected error while creating namespace: Post "https://104.198.13.163/api/v1/namespaces": dial tcp 104.198.13.163:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:02:24.785 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:02:24.882 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-1924/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/25/22 19:02:25.129 STEP: creating a pod to be part of the 
service external-local-nodeport 11/25/22 19:02:25.207 Nov 25 19:02:25.254: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 19:02:25.383: INFO: Found all 1 pods Nov 25 19:02:25.383: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-wxqpj] Nov 25 19:02:25.383: INFO: Waiting up to 2m0s for pod "external-local-nodeport-wxqpj" in namespace "esipp-1924" to be "running and ready" Nov 25 19:02:25.549: INFO: Pod "external-local-nodeport-wxqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 166.08702ms Nov 25 19:02:25.549: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:02:27.592: INFO: Pod "external-local-nodeport-wxqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20923279s Nov 25 19:02:27.592: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:02:29.615: INFO: Pod "external-local-nodeport-wxqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231580198s Nov 25 19:02:29.615: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:02:31.611: INFO: Pod "external-local-nodeport-wxqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227661499s Nov 25 19:02:31.611: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' to be 'Running' but was 'Pending' Nov 25 19:02:33.703: INFO: Pod "external-local-nodeport-wxqpj": Phase="Running", Reason="", readiness=false. Elapsed: 8.319918211s Nov 25 19:02:33.703: INFO: Error evaluating pod condition running and ready: pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:30 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:30 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC }] Nov 25 19:02:35.638: INFO: Pod "external-local-nodeport-wxqpj": Phase="Running", Reason="", readiness=false. Elapsed: 10.255446307s Nov 25 19:02:35.639: INFO: Error evaluating pod condition running and ready: pod 'external-local-nodeport-wxqpj' on 'bootstrap-e2e-minion-group-ft5h' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:30 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:30 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC }] Nov 25 19:02:37.622: INFO: Pod "external-local-nodeport-wxqpj": Phase="Running", Reason="", readiness=true. Elapsed: 12.238605414s Nov 25 19:02:37.622: INFO: Pod "external-local-nodeport-wxqpj" satisfied condition "running and ready" Nov 25 19:02:37.622: INFO: Wanted all 1 pods to be running and ready. 
Result: true. Pods: [external-local-nodeport-wxqpj] STEP: Performing setup for networking test in namespace esipp-1924 11/25/22 19:02:38.725 STEP: creating a selector 11/25/22 19:02:38.725 STEP: Creating the service pods in kubernetes 11/25/22 19:02:38.725 Nov 25 19:02:38.725: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 19:02:39.052: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-1924" to be "running and ready" Nov 25 19:02:39.114: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 61.909828ms Nov 25 19:02:39.114: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:02:41.167: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114529615s Nov 25 19:02:41.167: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:02:43.168: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115166414s Nov 25 19:02:43.168: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 19:02:45.172: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.12001167s Nov 25 19:02:45.172: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:47.172: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.120041463s Nov 25 19:02:47.172: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:49.171: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.11817065s Nov 25 19:02:49.171: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:51.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.131915357s Nov 25 19:02:51.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:53.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.131446916s Nov 25 19:02:53.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:55.173: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.120829783s Nov 25 19:02:55.173: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:57.177: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.124146701s Nov 25 19:02:57.177: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:02:59.184: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.132050955s Nov 25 19:02:59.184: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 19:03:01.175: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.122842519s Nov 25 19:03:01.175: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 19:03:01.175: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 19:03:01.251: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-1924" to be "running and ready" Nov 25 19:03:01.316: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 64.960571ms Nov 25 19:03:01.316: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:03.392: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.141672064s Nov 25 19:03:03.393: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:05.409: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.158130957s Nov 25 19:03:05.409: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:07.383: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.13233452s Nov 25 19:03:07.383: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:09.378: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.127101718s Nov 25 19:03:09.378: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:11.382: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.131179478s Nov 25 19:03:11.382: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 19:03:13.375: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 12.124430553s Nov 25 19:03:13.375: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 19:03:13.375: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 19:03:13.437: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-1924" to be "running and ready" Nov 25 19:03:13.488: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 51.397284ms Nov 25 19:03:13.488: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 19:03:15.570: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.133469576s Nov 25 19:03:15.570: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 19:03:17.682: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.245110741s Nov 25 19:03:17.682: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 19:03:19.559: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.122478329s Nov 25 19:03:19.559: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 19:03:21.589: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 8.151962087s Nov 25 19:03:21.589: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 19:03:23.561: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 10.123904595s Nov 25 19:03:23.561: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 19:03:23.561: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 19:03:23.621 Nov 25 19:03:23.719: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-1924" to be "running" Nov 25 19:03:23.803: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 84.013032ms Nov 25 19:03:25.866: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146822601s Nov 25 19:03:27.884: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165184247s Nov 25 19:03:29.866: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14680708s Nov 25 19:03:31.865: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.145727403s Nov 25 19:03:31.865: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 19:03:31.928: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 19:03:31.928 Nov 25 19:03:31.928: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 19:03:32.173 Nov 25 19:03:32.406: INFO: Service node-port-service in namespace esipp-1924 found. Nov 25 19:03:32.705: INFO: Service session-affinity-service in namespace esipp-1924 found. STEP: Waiting for NodePort service to expose endpoint 11/25/22 19:03:32.753 Nov 25 19:03:33.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:34.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:35.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:36.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:37.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:38.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:39.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:40.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:41.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:42.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:43.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:44.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:45.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:46.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:47.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:48.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:49.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:50.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:51.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:52.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:53.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:54.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:55.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:56.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:57.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:58.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:03:59.753: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:04:00.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:04:01.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:04:02.754: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 19:04:02.808: INFO: Waiting for amount of service:node-port-service 
endpoints to be 3 Nov 25 19:04:02.850: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-1924: <*errors.errorString | 0xc0001d3930>: { s: "timed out waiting for the condition", } Nov 25 19:04:02.850: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-1924: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc001082700, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001330000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 19:04:02.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 19:04:02.982: INFO: Output of kubectl describe svc: Nov 25 19:04:02.982: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=esipp-1924 describe svc --namespace=esipp-1924' Nov 25 19:04:22.658: INFO: stderr: "" Nov 25 19:04:22.658: INFO: stdout: "Name: node-port-service\nNamespace: esipp-1924\nLabels: <none>\nAnnotations: <none>\nSelector: selector-535d3296-d7ec-4a93-85a3-e5e042451266=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.188.223\nIPs: 10.0.188.223\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 32367/TCP\nEndpoints: 10.64.1.60:8083,10.64.2.60:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30685/UDP\nEndpoints: 10.64.1.60:8081,10.64.2.60:8081\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-1924\nLabels: <none>\nAnnotations: <none>\nSelector: selector-535d3296-d7ec-4a93-85a3-e5e042451266=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.206.143\nIPs: 10.0.206.143\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 30890/TCP\nEndpoints: 10.64.1.60:8083,10.64.2.60:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 31532/UDP\nEndpoints: 10.64.1.60:8081,10.64.2.60:8081\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 25 19:04:22.658: INFO: Name: node-port-service Namespace: esipp-1924 Labels: <none> Annotations: <none> Selector: selector-535d3296-d7ec-4a93-85a3-e5e042451266=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.188.223 IPs: 10.0.188.223 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 32367/TCP Endpoints: 10.64.1.60:8083,10.64.2.60:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 30685/UDP Endpoints: 10.64.1.60:8081,10.64.2.60:8081 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-1924 Labels: <none> Annotations: <none> Selector: selector-535d3296-d7ec-4a93-85a3-e5e042451266=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.206.143 IPs: 10.0.206.143 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 30890/TCP Endpoints: 10.64.1.60:8083,10.64.2.60:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 31532/UDP Endpoints: 10.64.1.60:8081,10.64.2.60:8081 Session 
Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 19:04:22.658 STEP: Collecting events from namespace "esipp-1924". 11/25/22 19:04:22.658 STEP: Found 31 events. 11/25/22 19:04:22.722 Nov 25 19:04:22.722: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-nodeport-wxqpj: { } Scheduled: Successfully assigned esipp-1924/external-local-nodeport-wxqpj to bootstrap-e2e-minion-group-ft5h Nov 25 19:04:22.722: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-1924/netserver-0 to bootstrap-e2e-minion-group-ft5h Nov 25 19:04:22.722: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-1924/netserver-1 to bootstrap-e2e-minion-group-p8wv Nov 25 19:04:22.722: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-1924/netserver-2 to bootstrap-e2e-minion-group-rvwg Nov 25 19:04:22.722: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-1924/test-container-pod to bootstrap-e2e-minion-group-rvwg Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:25 +0000 UTC - event for external-local-nodeport: {replication-controller } SuccessfulCreate: Created pod: external-local-nodeport-wxqpj Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:27 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container netexec Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:27 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container netexec Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:27 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:28 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container netexec Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:30 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Unhealthy: Readiness probe failed: Get "http://10.64.1.46:80/hostName": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:30 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:39 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} Unhealthy: Readiness probe failed: Get "http://10.64.1.51:80/hostName": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:39 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:39 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Created: Created container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:39 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Started: Started container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:40 +0000 UTC - event for external-local-nodeport-wxqpj: {kubelet bootstrap-e2e-minion-group-ft5h} BackOff: Back-off restarting failed container netexec in pod external-local-nodeport-wxqpj_esipp-1924(aeeec5e0-4ff2-4e18-95f8-c577f3876a21) Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:40 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Started: Started container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:40 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Created: Created container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:40 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:41 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} Killing: Stopping container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:41 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} Killing: Stopping container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:42 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-ft5h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:42 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:45 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-rvwg} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-1924(1fc590ed-7201-4359-9e61-d0035c17dfc0) Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:48 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Started: Started container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:48 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Created: Created container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:02:48 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-p8wv} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:04:22.722: INFO: At 2022-11-25 19:03:25 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-rvwg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 19:04:22.722: INFO: At 2022-11-25 19:03:25 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-rvwg} Created: Created container webserver Nov 25 19:04:22.722: INFO: At 2022-11-25 19:03:25 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-rvwg} Started: Started container webserver Nov 25 19:04:22.788: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:04:22.788: INFO: external-local-nodeport-wxqpj bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:25 +0000 UTC }] Nov 25 19:04:22.788: INFO: netserver-0 bootstrap-e2e-minion-group-ft5h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:38 +0000 UTC }] Nov 25 19:04:22.788: INFO: netserver-1 bootstrap-e2e-minion-group-p8wv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:38 +0000 UTC }] Nov 25 19:04:22.788: INFO: netserver-2 bootstrap-e2e-minion-group-rvwg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:02:39 +0000 UTC }] Nov 25 19:04:22.788: INFO: test-container-pod bootstrap-e2e-minion-group-rvwg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:03:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2022-11-25 19:03:23 +0000 UTC }] Nov 25 19:04:22.788: INFO: Nov 25 19:04:23.304: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:04:23.393: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 3773 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:00:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:58 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:58 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:58 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:00:58 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:104.198.13.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:baec19a097c341ae8d14b5ee519a12bc,SystemUUID:baec19a0-97c3-41ae-8d14-b5ee519a12bc,BootID:cbb52bbc-4a45-4571-8271-7b01e70f9d0d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:04:23.394: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:04:23.460: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:04:23.573: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 19:04:23.573: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container etcd-container ready: true, restart count 2 Nov 25 19:04:23.573: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 19:04:23.573: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container kube-apiserver ready: true, restart count 1 Nov 25 19:04:23.573: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container kube-scheduler ready: true, restart count 3 Nov 25 19:04:23.573: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container etcd-container ready: true, restart count 2 Nov 25 19:04:23.573: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container kube-controller-manager ready: true, restart count 4 Nov 25 19:04:23.573: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:23.573: INFO: Container l7-lb-controller ready: true, restart count 6 Nov 25 19:04:23.573: INFO: metadata-proxy-v0.1-sd5zx started at 2022-11-25 18:55:32 +0000 UTC (0+2 container statuses recorded) Nov 25 19:04:23.573: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:04:23.573: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:04:23.838: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 19:04:23.838: INFO: Logging node info for node bootstrap-e2e-minion-group-ft5h Nov 25 19:04:23.890: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ft5h f6d0c520-a72a-4938-9464-c37052e3eead 6293 0 2022-11-25 18:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ft5h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-ft5h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-9297":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-9449":"csi-mock-csi-mock-volumes-9449"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 18:59:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:00:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:04:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-ft5h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:39 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:03:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:03:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:03:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:03:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.110.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b840d0c08d09e2ff89c00115dd74e373,SystemUUID:b840d0c0-8d09-e2ff-89c0-0115dd74e373,BootID:2c546fd1-5b2e-4c92-9b03-025eb8882457,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-9297^c9bf751e-6cf3-11ed-bd04-1a231c4e3069],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:04:23.890: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ft5h Nov 25 19:04:23.958: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:04:24.127: INFO: external-local-nodeport-wxqpj started at 2022-11-25 19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container netexec ready: true, restart count 3 Nov 25 19:04:24.128: INFO: pod-subpath-test-inlinevolume-68pp started at 2022-11-25 19:03:49 +0000 UTC (1+2 container statuses recorded) Nov 25 19:04:24.128: INFO: Init container init-volume-inlinevolume-68pp ready: true, restart count 1 Nov 25 19:04:24.128: INFO: Container test-container-subpath-inlinevolume-68pp ready: true, restart count 2 Nov 25 19:04:24.128: INFO: Container test-container-volume-inlinevolume-68pp ready: true, restart count 1 Nov 25 19:04:24.128: INFO: pod-42d3d460-d8f7-4fac-a41e-d042e8c84b90 started at 2022-11-25 19:04:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:04:24.128: INFO: inclusterclient started at 2022-11-25 19:03:46 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container inclusterclient ready: true, restart count 0 Nov 25 19:04:24.128: INFO: pod-1d0699bd-0639-43ca-a18c-8aa2cc78ed27 started at 2022-11-25 19:03:57 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container write-pod ready: true, restart count 0 Nov 25 19:04:24.128: INFO: forbid-27823384-rmpz4 started at 2022-11-25 19:04:00 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container c ready: true, restart count 0 Nov 25 19:04:24.128: INFO: metrics-server-v0.5.2-867b8754b9-565bg started at 2022-11-25 
18:57:46 +0000 UTC (0+2 container statuses recorded) Nov 25 19:04:24.128: INFO: Container metrics-server ready: false, restart count 4 Nov 25 19:04:24.128: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 25 19:04:24.128: INFO: pvc-volume-tester-8ldh9 started at 2022-11-25 19:03:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:04:24.128: INFO: external-provisioner-7rksp started at 2022-11-25 19:03:29 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container nfs-provisioner ready: true, restart count 2 Nov 25 19:04:24.128: INFO: test-hostpath-type-wflfg started at 2022-11-25 19:03:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:04:24.128: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-ln9vd started at 2022-11-25 18:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:04:24.128: INFO: kube-proxy-bootstrap-e2e-minion-group-ft5h started at 2022-11-25 18:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container kube-proxy ready: false, restart count 3 Nov 25 19:04:24.128: INFO: metadata-proxy-v0.1-9vhzj started at 2022-11-25 18:55:34 +0000 UTC (0+2 container statuses recorded) Nov 25 19:04:24.128: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:04:24.128: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:04:24.128: INFO: netserver-0 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container webserver ready: true, restart count 2 Nov 25 19:04:24.128: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-rz6vn started at 2022-11-25 18:59:31 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:04:24.128: INFO: konnectivity-agent-qf52c started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 25 19:04:24.128: INFO: pvc-volume-tester-c2f6h started at 2022-11-25 18:59:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:04:24.128: INFO: net-tiers-svc-pnq45 started at 2022-11-25 19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container netexec ready: true, restart count 1 Nov 25 19:04:24.128: INFO: csi-mockplugin-0 started at 2022-11-25 18:58:23 +0000 UTC (0+4 container statuses recorded) Nov 25 19:04:24.128: INFO: Container busybox ready: true, restart count 1 Nov 25 19:04:24.128: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:04:24.128: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 19:04:24.128: INFO: Container mock ready: true, restart count 1 Nov 25 19:04:24.128: INFO: local-io-client started at 2022-11-25 19:04:10 +0000 UTC (1+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 19:04:24.128: INFO: Container local-io-client ready: true, restart count 0 Nov 25 19:04:24.128: INFO: external-provisioner-64cmw started at 2022-11-25 19:03:45 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container nfs-provisioner ready: 
true, restart count 0 Nov 25 19:04:24.128: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-sqgkv started at 2022-11-25 19:03:45 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:04:24.128: INFO: emptydir-io-client started at 2022-11-25 18:59:40 +0000 UTC (1+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Init container emptydir-io-init ready: true, restart count 0 Nov 25 19:04:24.128: INFO: Container emptydir-io-client ready: false, restart count 0 Nov 25 19:04:24.128: INFO: hostpath-symlink-prep-provisioning-1956 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container init-volume-provisioning-1956 ready: false, restart count 0 Nov 25 19:04:24.128: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-sthmb started at 2022-11-25 19:03:58 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:04:24.128: INFO: netserver-0 started at 2022-11-25 19:02:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container webserver ready: true, restart count 1 Nov 25 19:04:24.128: INFO: netserver-0 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container webserver ready: true, restart count 1 Nov 25 19:04:24.128: INFO: httpd started at 2022-11-25 19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container httpd ready: true, restart count 0 Nov 25 19:04:24.128: INFO: csi-mockplugin-0 started at 2022-11-25 19:02:58 +0000 UTC (0+3 container statuses recorded) Nov 25 19:04:24.128: INFO: Container csi-provisioner ready: false, restart count 0 Nov 25 19:04:24.128: INFO: Container driver-registrar ready: false, restart count 0 Nov 25 19:04:24.128: INFO: Container mock ready: false, restart count 0 Nov 25 19:04:24.128: INFO: test-hostpath-type-4gszr started at 2022-11-25 19:03:32 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 19:04:24.128: INFO: failure-3 started at 2022-11-25 19:02:50 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container failure-3 ready: true, restart count 1 Nov 25 19:04:24.128: INFO: test-hostpath-type-7djl8 started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.128: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:04:24.726: INFO: Latency metrics for node bootstrap-e2e-minion-group-ft5h Nov 25 19:04:24.726: INFO: Logging node info for node bootstrap-e2e-minion-group-p8wv Nov 25 19:04:24.798: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-p8wv 881c9872-bf9e-40c3-a0e6-f3f276af90f5 6259 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-p8wv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-p8wv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-2992":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-6105":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8134":"bootstrap-e2e-minion-group-p8wv","csi-hostpath-multivolume-8995":"bootstrap-e2e-minion-group-p8wv"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:00:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:03:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 19:04:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-p8wv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:40 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:04:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:04:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:04:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:04:16 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.198.109.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f8ac2f5b7ee9732248672e4b22a9ad9,SystemUUID:7f8ac2f5-b7ee-9732-2486-72e4b22a9ad9,BootID:ba8ee318-1295-42a5-a59a-f3bfc254bc58,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8995^e80f9d47-6cf3-11ed-9675-d273053079ec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8995^e80f9d47-6cf3-11ed-9675-d273053079ec,DevicePath:,},},Config:nil,},} Nov 25 19:04:24.799: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-p8wv Nov 25 19:04:24.853: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-p8wv Nov 25 19:04:24.972: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:02:28 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:24.972: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:04:24.972: INFO: kube-proxy-bootstrap-e2e-minion-group-p8wv started at 2022-11-25 18:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container kube-proxy ready: true, restart count 5 Nov 25 19:04:24.972: INFO: volume-prep-provisioning-9094 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container init-volume-provisioning-9094 ready: false, restart count 0 Nov 25 19:04:24.972: INFO: httpd started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container httpd ready: false, restart count 5 Nov 25 19:04:24.972: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:53 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:24.972: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:04:24.972: INFO: 
pod-2e90cef0-8649-42c5-bdd5-c250a789d319 started at 2022-11-25 19:04:20 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container write-pod ready: true, restart count 0 Nov 25 19:04:24.972: INFO: coredns-6d97d5ddb-wrz6b started at 2022-11-25 18:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container coredns ready: false, restart count 5 Nov 25 19:04:24.972: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:02:27 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:24.972: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container hostpath ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 19:04:24.972: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:59:20 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:24.972: INFO: Container csi-attacher ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container csi-provisioner ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container csi-resizer ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container hostpath ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container liveness-probe ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 25 19:04:24.972: INFO: netserver-1 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container webserver ready: false, restart count 2 Nov 25 19:04:24.972: INFO: pod-configmaps-7e353c13-950c-4423-9808-a6f4226d3913 started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:04:24.972: INFO: netserver-1 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container webserver ready: true, restart count 5 Nov 25 19:04:24.972: INFO: netserver-1 started at 2022-11-25 19:02:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container webserver ready: true, restart count 0 Nov 25 19:04:24.972: INFO: pod-a35d1afd-b28b-4adc-bb8b-e7e46f7d6fff started at 2022-11-25 19:03:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container write-pod ready: true, restart count 0 Nov 25 19:04:24.972: INFO: konnectivity-agent-26n2n started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container konnectivity-agent ready: false, restart count 3 Nov 25 19:04:24.972: INFO: pod-configmaps-cff0010c-8d2d-4981-8991-10714a4dd75e started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:04:24.972: INFO: external-local-update-gqvff started at 2022-11-25 18:59:02 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container netexec ready: true, restart count 3 Nov 25 19:04:24.972: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-pzxrm started at 2022-11-25 
19:02:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:04:24.972: INFO: metadata-proxy-v0.1-zw9qm started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:04:24.972: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:04:24.972: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-bds6n started at 2022-11-25 18:58:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:04:24.972: INFO: pod-8bbfb4d6-d93f-46c7-bf6b-1853fc9cc35b started at 2022-11-25 18:59:01 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:04:24.972: INFO: pod-subpath-test-preprovisionedpv-d24t started at 2022-11-25 19:02:41 +0000 UTC (1+2 container statuses recorded) Nov 25 19:04:24.972: INFO: Init container init-volume-preprovisionedpv-d24t ready: true, restart count 0 Nov 25 19:04:24.972: INFO: Container test-container-subpath-preprovisionedpv-d24t ready: true, restart count 2 Nov 25 19:04:24.972: INFO: Container test-container-volume-preprovisionedpv-d24t ready: true, restart count 2 Nov 25 19:04:24.972: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-ksk87 started at 2022-11-25 18:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 19:04:24.972: INFO: pod-cd008d33-bceb-4987-90b6-3a0f9c4d551d started at 2022-11-25 19:04:16 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:24.972: INFO: Container write-pod ready: true, restart count 0 Nov 25 19:04:25.500: INFO: Latency metrics for node bootstrap-e2e-minion-group-p8wv Nov 25 19:04:25.500: INFO: Logging node info for node bootstrap-e2e-minion-group-rvwg Nov 25 19:04:25.553: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rvwg d72b04ed-8c3e-4237-a1a1-842914101de6 6169 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rvwg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rvwg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumeio-2766":"bootstrap-e2e-minion-group-rvwg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 18:59:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-rvwg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 
+0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:00:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:25 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:25 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:00:25 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:00:25 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.2.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dbf5cb99be0ec068c5cd2f1643938098,SystemUUID:dbf5cb99-be0e-c068-c5cd-2f1643938098,BootID:e2f727ad-5c6c-4e26-854f-4f7e80c2c71f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:04:25.554: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rvwg Nov 25 19:04:25.616: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rvwg Nov 25 19:04:25.708: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:40 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:25.708: INFO: Container csi-attacher ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container csi-provisioner ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container csi-resizer ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container hostpath ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container liveness-probe ready: false, restart count 2 Nov 25 19:04:25.708: INFO: Container node-driver-registrar ready: false, restart count 2 Nov 25 19:04:25.708: INFO: netserver-2 started at 2022-11-25 19:03:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container webserver ready: false, restart count 2 Nov 25 19:04:25.708: INFO: var-expansion-0dd24479-e096-47b5-86ae-a4f4a6d0feb2 started at 2022-11-25 19:03:20 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container dapi-container ready: false, restart count 0 Nov 25 19:04:25.708: INFO: test-container-pod started at 2022-11-25 19:03:23 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container webserver ready: true, restart count 0 Nov 25 19:04:25.708: INFO: 
hostexec-bootstrap-e2e-minion-group-rvwg-7zc5x started at 2022-11-25 19:04:23 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:04:25.708: INFO: l7-default-backend-8549d69d99-vsr6h started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 19:04:25.708: INFO: netserver-2 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container webserver ready: false, restart count 3 Nov 25 19:04:25.708: INFO: pod-b613c2de-9165-4dc0-b03e-fba4980e2ba0 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:04:25.708: INFO: mutability-test-js7zw started at 2022-11-25 18:58:39 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container netexec ready: true, restart count 5 Nov 25 19:04:25.708: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-x62bn started at 2022-11-25 18:58:59 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 19:04:25.708: INFO: kube-dns-autoscaler-5f6455f985-5hbzc started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.708: INFO: Container autoscaler ready: true, restart count 4 Nov 25 19:04:25.708: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:58:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:25.708: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 19:04:25.708: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 19:04:25.708: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 19:04:25.708: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 19:04:25.708: INFO: Container hostpath ready: true, restart count 1 Nov 25 19:04:25.709: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 19:04:25.709: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 19:04:25.709: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-xrc76 started at 2022-11-25 19:03:48 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 19:04:25.709: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-fkmcp started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container agnhost-container ready: false, restart count 1 Nov 25 19:04:25.709: INFO: pod-72691924-c01b-4050-9991-a15a20879782 started at 2022-11-25 18:59:17 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:04:25.709: INFO: metadata-proxy-v0.1-szbqx started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:04:25.709: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:04:25.709: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:04:25.709: INFO: coredns-6d97d5ddb-l2w5l started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container coredns ready: false, restart count 4 Nov 25 19:04:25.709: INFO: kube-proxy-bootstrap-e2e-minion-group-rvwg started at 2022-11-25 18:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container 
kube-proxy ready: false, restart count 5 Nov 25 19:04:25.709: INFO: pod-subpath-test-preprovisionedpv-9jlj started at 2022-11-25 19:03:57 +0000 UTC (1+2 container statuses recorded) Nov 25 19:04:25.709: INFO: Init container init-volume-preprovisionedpv-9jlj ready: true, restart count 1 Nov 25 19:04:25.709: INFO: Container test-container-subpath-preprovisionedpv-9jlj ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container test-container-volume-preprovisionedpv-9jlj ready: true, restart count 1 Nov 25 19:04:25.709: INFO: pod-a42d737b-fb10-4b6c-b071-ba961690b9ce started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:04:25.709: INFO: volume-snapshot-controller-0 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 19:04:25.709: INFO: konnectivity-agent-9br57 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 19:04:25.709: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:03:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:04:25.709: INFO: Container csi-attacher ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container csi-provisioner ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container csi-resizer ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container csi-snapshotter ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container hostpath ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container liveness-probe ready: false, restart count 2 Nov 25 19:04:25.709: INFO: Container node-driver-registrar ready: false, restart count 2 Nov 25 19:04:25.709: INFO: netserver-2 started at 2022-11-25 19:02:39 +0000 UTC (0+1 container statuses recorded) Nov 25 19:04:25.709: INFO: Container webserver ready: false, restart count 3 Nov 25 19:04:26.057: INFO: Latency metrics for node bootstrap-e2e-minion-group-rvwg [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-1924" for this suite. 11/25/22 19:04:26.057
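The node dump above ("Logging pods the kubelet thinks is on node ...") enumerates every pod bound to bootstrap-e2e-minion-group-rvwg together with per-container readiness and restart counts. A minimal client-go sketch of that kind of per-node pod listing is shown below; it is illustrative only and not the framework's exact dump helper, and the kubeconfig path and node name are taken from this log as assumptions.

```go
// Hypothetical sketch: list the pods scheduled to one node, similar in spirit
// to the debug dump above, by filtering on spec.nodeName.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable cluster and the kubeconfig path used by these e2e runs.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodeName := "bootstrap-e2e-minion-group-rvwg" // node from the dump above
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%v restarts=%d\n",
				pod.Namespace, pod.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}
```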
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/network/loadbalancer.go:1429 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1429 +0xdd
from junit_01.xml
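The goroutine dumps in the progress reports below show the spec stuck in TestJig.WaitForLoadBalancer, which polls through wait.PollImmediate while the apiserver at 104.198.13.163:443 refuses connections. A minimal sketch of that polling pattern follows; the function name and retry interval are assumptions for illustration, not the framework's exact implementation.

```go
// Sketch of the wait pattern visible in the stack traces below: repeatedly GET
// the Service until status.loadBalancer.ingress is populated or the timeout
// expires. Names here are illustrative, not the framework's exact helpers.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLoadBalancerIngress blocks until the named Service has at least one
// LoadBalancer ingress entry, polling every 2s for up to the given timeout
// (the test above waits up to 15m0s).
func waitForLoadBalancerIngress(client kubernetes.Interface, ns, name string, timeout time.Duration) (*corev1.Service, error) {
	var svc *corev1.Service
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		s, err := client.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient API errors (e.g. "connection refused" while the
			// apiserver is down) are tolerated; the poll simply retries.
			fmt.Printf("Retrying: %v\n", err)
			return false, nil
		}
		if len(s.Status.LoadBalancer.Ingress) == 0 {
			return false, nil
		}
		svc = s
		return true, nil
	})
	return svc, err
}
```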
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 19:15:22.407 Nov 25 19:15:22.407: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 19:15:22.409 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 19:15:22.611 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 19:15:22.703 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work from pods test/e2e/network/loadbalancer.go:1422 STEP: creating a service esipp-6235/external-local-pods with type=LoadBalancer 11/25/22 19:15:22.905 STEP: setting ExternalTrafficPolicy=Local 11/25/22 19:15:22.905 STEP: waiting for loadbalancer for service esipp-6235/external-local-pods 11/25/22 19:15:23.052 Nov 25 19:15:23.052: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m0.499s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 4m59.854s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m20.501s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 5m19.856s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:20:51.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:20:53.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:20:55.165: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:20:57.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:20:59.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:01.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 5m40.503s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 5m39.858s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:21:03.166: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:05.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:07.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:09.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:11.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:13.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:15.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:17.167: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:19.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:21.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m0.505s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m0.006s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 5m59.86s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:21:23.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:25.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:27.168: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:29.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:31.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:33.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:35.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:37.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:39.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:41.165: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m20.507s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m20.009s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 6m19.862s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:21:43.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:45.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:47.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:49.166: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:51.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:53.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:55.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:57.169: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:21:59.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:01.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m40.51s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 6m40.011s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 6m39.864s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:22:03.165: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:05.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:07.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:09.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:11.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:22:13.166: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://104.198.13.163/api/v1/namespaces/esipp-6235/services/external-local-pods": dial tcp 104.198.13.163:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m0.512s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m0.013s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 6m59.866s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m20.514s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m20.015s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 7m19.868s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 7m40.516s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 7m40.018s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 7m39.871s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m0.518s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m0.019s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 7m59.872s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m20.52s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m20.022s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 8m19.875s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 8m40.522s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 8m40.024s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 8m39.877s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 9m0.524s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m0.026s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 8m59.879s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 9m20.526s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m20.027s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 9m19.881s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 9m40.528s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 9m40.03s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 9m39.883s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m0.53s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m0.032s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 9m59.885s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m20.532s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m20.034s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 10m19.887s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 10m40.535s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 10m40.037s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 10m39.89s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m0.538s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m0.04s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 10m59.893s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m20.541s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m20.042s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 11m19.896s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 11m40.543s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 11m40.045s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 11m39.898s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc001710000, 0xc0011ef000) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0030ee300, 0xc0011ef000, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003124000?}, 0xc0011ef000?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003124000, 0xc0011ef000) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0057b2a20?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc001b219b0, 0xc0011eef00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0000da940, 0xc0011eee00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0011eee00, {0x7fad100, 0xc0000da940}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001b219e0, 0xc0011eee00, {0x7f2ee8027108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001b219e0, 0xc0011eee00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0011eec00, {0x7fe0bc8, 0xc0000820e0}, 0x0?) 
vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0011eec00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*services).Get(0xc0012a0420, {0x7fe0bc8, 0xc0000820e0}, {0x75fa500, 0x13}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/service.go:79 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition.func1() test/e2e/framework/service/jig.go:620 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc000f54240?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m0.546s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m0.048s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 11m59.901s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select, 2 minutes] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc001710000, 0xc0011ef000) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0030ee300, 0xc0011ef000, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003124000?}, 0xc0011ef000?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003124000, 0xc0011ef000) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0057b2a20?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc001b219b0, 0xc0011eef00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0000da940, 0xc0011eee00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0011eee00, {0x7fad100, 0xc0000da940}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001b219e0, 0xc0011eee00, {0x7f2ee8027108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001b219e0, 0xc0011eee00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0011eec00, {0x7fe0bc8, 0xc0000820e0}, 0x0?) 
vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0011eec00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*services).Get(0xc0012a0420, {0x7fe0bc8, 0xc0000820e0}, {0x75fa500, 0x13}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/service.go:79 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition.func1() test/e2e/framework/service/jig.go:620 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc000f54240?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m20.548s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m20.05s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 12m19.903s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 12m40.552s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 12m40.053s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 12m39.906s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m0.554s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m0.055s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 12m59.909s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m20.556s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m20.057s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 13m19.91s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 13m40.558s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 13m40.059s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 13m39.913s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m0.56s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 14m0.062s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 13m59.915s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m20.562s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 14m20.064s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 14m19.917s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 14m40.564s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 14m40.066s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 14m39.919s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #24 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 15m0.566s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 15m0.068s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-6235/external-local-pods (Step Runtime: 14m59.921s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 4047 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0047ebcf8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00299a4e0?, 0xc004295a40?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000f541f0?, 0x7fa7740?, 0xc00019ed00?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc003795b80, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc003795b80, 0x43?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateLoadBalancerService(0xc003795b80, 0x6aba880?, 0xc004295cf0) test/e2e/framework/service/jig.go:261 > k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).CreateOnlyLocalLoadBalancerService(0xc003795b80, 0xc0035b9380?, 0x1, 0xa?) 
test/e2e/framework/service/jig.go:222 > k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1428 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0023a3800}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:30:23.208: INFO: Unexpected error: <*fmt.wrapError | 0xc0035480a0>: { msg: "timed out waiting for service \"external-local-pods\" to have a load balancer: timed out waiting for the condition", err: <*errors.errorString | 0xc000195d80>{ s: "timed out waiting for the condition", }, } Nov 25 19:30:23.208: FAIL: timed out waiting for service "external-local-pods" to have a load balancer: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1429 +0xdd [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 19:30:23.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 19:30:23.291: INFO: Output of kubectl describe svc: Nov 25 19:30:23.291: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=esipp-6235 describe svc --namespace=esipp-6235' Nov 25 19:30:23.617: INFO: stderr: "" Nov 25 19:30:23.617: INFO: stdout: "Name: external-local-pods\nNamespace: esipp-6235\nLabels: testid=external-local-pods-d9d465bf-cd4d-4775-aaf0-95a02b3b1794\nAnnotations: <none>\nSelector: testid=external-local-pods-d9d465bf-cd4d-4775-aaf0-95a02b3b1794\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.5.217\nIPs: 10.0.5.217\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 30762/TCP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Local\nHealthCheck NodePort: 31494\nEvents: <none>\n" Nov 25 19:30:23.617: INFO: Name: external-local-pods Namespace: esipp-6235 Labels: testid=external-local-pods-d9d465bf-cd4d-4775-aaf0-95a02b3b1794 Annotations: <none> Selector: testid=external-local-pods-d9d465bf-cd4d-4775-aaf0-95a02b3b1794 Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.5.217 IPs: 10.0.5.217 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30762/TCP Endpoints: <none> Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 31494 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 19:30:23.617 STEP: Collecting events from namespace "esipp-6235". 11/25/22 19:30:23.617 STEP: Found 0 events. 
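Editor's note: the repeated progress reports above all show the same spec goroutine parked in wait.PollImmediate via TestJig.waitForCondition, which keeps Get-ing the Service until status.loadBalancer.ingress is populated; the kubectl describe output confirms that external-local-pods received a ClusterIP and NodePort but never an external load-balancer IP before the wait gave up (the last report shows the step at just over 15 minutes). A minimal standalone sketch of that wait loop, assuming client-go and an illustrative 10s poll interval and 15m timeout (the framework's actual values may differ), is:

// Sketch only: approximates the polling visible in the stack traces above
// (wait.PollImmediate -> TestJig.waitForCondition -> services.Get). The
// namespace, service name, and kubeconfig path are taken from this log;
// the interval and timeout are assumptions for illustration.
package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Poll the Service until status.loadBalancer.ingress is populated, which is
    // the condition the test was still waiting on when it timed out.
    err = wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
        svc, err := cs.CoreV1().Services("esipp-6235").Get(context.TODO(), "external-local-pods", metav1.GetOptions{})
        if err != nil {
            return false, nil // treat transient API errors as "keep polling"
        }
        return len(svc.Status.LoadBalancer.Ingress) > 0, nil
    })
    if err != nil {
        fmt.Println("timed out waiting for load balancer ingress:", err)
    }
}

Separately, the describe output shows Endpoints: <none> on a Service with External Traffic Policy: Local, so even once an ingress IP had been assigned, the health-check NodePort (31494) would likely have reported no healthy backends until the test's pods were actually serving.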
11/25/22 19:30:23.657 Nov 25 19:30:23.698: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:30:23.698: INFO: Nov 25 19:30:23.746: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:30:23.788: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 13911 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:27:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:104.198.13.163,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:baec19a097c341ae8d14b5ee519a12bc,SystemUUID:baec19a0-97c3-41ae-8d14-b5ee519a12bc,BootID:cbb52bbc-4a45-4571-8271-7b01e70f9d0d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:30:23.788: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:30:23.833: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:30:23.890: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container etcd-container ready: true, restart count 5 Nov 25 19:30:23.890: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container kube-controller-manager ready: false, restart count 8 Nov 25 19:30:23.890: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container l7-lb-controller ready: false, restart count 9 Nov 25 19:30:23.890: INFO: metadata-proxy-v0.1-sd5zx started at 2022-11-25 18:55:32 +0000 UTC (0+2 container statuses recorded) Nov 25 19:30:23.890: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:30:23.890: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:30:23.890: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 18:55:06 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container kube-addon-manager ready: true, restart count 4 Nov 25 19:30:23.890: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container etcd-container ready: true, restart count 6 Nov 25 19:30:23.890: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 19:30:23.890: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container kube-apiserver ready: true, restart count 3 Nov 25 19:30:23.890: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 18:54:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:23.890: INFO: Container kube-scheduler ready: false, restart count 7 Nov 25 19:30:24.071: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 19:30:24.071: INFO: Logging node info for node bootstrap-e2e-minion-group-ft5h Nov 25 19:30:24.112: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ft5h f6d0c520-a72a-4938-9464-c37052e3eead 14129 0 2022-11-25 18:55:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ft5h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-ft5h topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2558":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-4022":"bootstrap-e2e-minion-group-ft5h","csi-hostpath-provisioning-8147":"bootstrap-e2e-minion-group-ft5h","csi-mock-csi-mock-volumes-4834":"bootstrap-e2e-minion-group-ft5h"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:25:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:26:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 19:30:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-ft5h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:43 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:27:38 +0000 UTC,LastTransitionTime:2022-11-25 18:55:34 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.110.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ft5h.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b840d0c08d09e2ff89c00115dd74e373,SystemUUID:b840d0c0-8d09-e2ff-89c0-0115dd74e373,BootID:2c546fd1-5b2e-4c92-9b03-025eb8882457,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-8147^25a6945e-6cf4-11ed-85c7-f6a638f4faa2,DevicePath:,},},Config:nil,},} Nov 25 19:30:24.113: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ft5h Nov 25 19:30:24.157: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ft5h Nov 25 19:30:24.264: INFO: metrics-server-v0.5.2-867b8754b9-565bg started at 2022-11-25 18:57:46 +0000 UTC (0+2 container statuses recorded) Nov 25 19:30:24.264: INFO: Container metrics-server ready: false, restart count 9 Nov 25 19:30:24.264: INFO: Container metrics-server-nanny ready: false, restart count 10 Nov 25 19:30:24.264: INFO: pod-configmaps-291bd4a4-c487-4adc-aac5-167c598efec8 started at 2022-11-25 19:14:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:30:24.264: INFO: inclusterclient started at 2022-11-25 19:03:46 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container inclusterclient ready: false, restart count 0 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:09:29 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container hostpath 
ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 25 19:30:24.264: INFO: csi-mockplugin-0 started at 2022-11-25 19:14:19 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:24.264: INFO: Container busybox ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container driver-registrar ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container mock ready: false, restart count 6 Nov 25 19:30:24.264: INFO: kube-proxy-bootstrap-e2e-minion-group-ft5h started at 2022-11-25 18:55:34 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container kube-proxy ready: false, restart count 9 Nov 25 19:30:24.264: INFO: metadata-proxy-v0.1-9vhzj started at 2022-11-25 18:55:34 +0000 UTC (0+2 container statuses recorded) Nov 25 19:30:24.264: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:30:24.264: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:30:24.264: INFO: netserver-0 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container webserver ready: true, restart count 7 Nov 25 19:30:24.264: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-rz6vn started at 2022-11-25 18:59:31 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container agnhost-container ready: true, restart count 6 Nov 25 19:30:24.264: INFO: hostexec-bootstrap-e2e-minion-group-ft5h-ln9vd started at 2022-11-25 18:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container agnhost-container ready: true, restart count 6 Nov 25 19:30:24.264: INFO: ss-0 started at 2022-11-25 19:09:24 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container webserver ready: true, restart count 4 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:43 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container hostpath ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 25 19:30:24.264: INFO: konnectivity-agent-qf52c started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container konnectivity-agent ready: true, restart count 9 Nov 25 19:30:24.264: INFO: pvc-volume-tester-c2f6h started at 2022-11-25 18:59:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container volume-tester ready: false, restart count 0 Nov 25 19:30:24.264: INFO: pod-secrets-832ca89d-3703-463e-8f67-c99ff7b3d32c started at 2022-11-25 19:14:48 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:39 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: true, restart count 5 Nov 
25 19:30:24.264: INFO: Container csi-provisioner ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container hostpath ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 25 19:30:24.264: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:41 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:24.264: INFO: Container busybox ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 19:30:24.264: INFO: Container driver-registrar ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container mock ready: false, restart count 7 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:15:35 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container hostpath ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: true, restart count 5 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 25 19:30:24.264: INFO: csi-mockplugin-0 started at 2022-11-25 18:58:23 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:24.264: INFO: Container busybox ready: false, restart count 8 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 9 Nov 25 19:30:24.264: INFO: Container driver-registrar ready: false, restart count 9 Nov 25 19:30:24.264: INFO: Container mock ready: false, restart count 9 Nov 25 19:30:24.264: INFO: back-off-cap started at 2022-11-25 19:04:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container back-off-cap ready: false, restart count 9 Nov 25 19:30:24.264: INFO: pod-6de8dace-638d-4aa0-84cf-fc26c6af930b started at 2022-11-25 19:15:05 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:30:24.264: INFO: execpod-dropsgh2h started at 2022-11-25 19:14:41 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 19:30:24.264: INFO: emptydir-io-client started at 2022-11-25 18:59:40 +0000 UTC (1+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Init container emptydir-io-init ready: true, restart count 0 Nov 25 19:30:24.264: INFO: Container emptydir-io-client ready: false, restart count 0 Nov 25 19:30:24.264: INFO: hostpath-symlink-prep-provisioning-1956 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container init-volume-provisioning-1956 ready: false, restart count 0 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:08:48 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: false, restart count 7 
Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container hostpath ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: false, restart count 7 Nov 25 19:30:24.264: INFO: csi-mockplugin-0 started at 2022-11-25 19:02:58 +0000 UTC (0+3 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container driver-registrar ready: false, restart count 7 Nov 25 19:30:24.264: INFO: Container mock ready: false, restart count 7 Nov 25 19:30:24.264: INFO: lb-sourcerange-f26pd started at 2022-11-25 19:14:49 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container netexec ready: true, restart count 6 Nov 25 19:30:24.264: INFO: netserver-0 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container webserver ready: true, restart count 7 Nov 25 19:30:24.264: INFO: test-hostpath-type-7djl8 started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 19:30:24.264: INFO: pod-subpath-test-dynamicpv-nplw started at 2022-11-25 19:05:39 +0000 UTC (1+2 container statuses recorded) Nov 25 19:30:24.264: INFO: Init container init-volume-dynamicpv-nplw ready: true, restart count 0 Nov 25 19:30:24.264: INFO: Container test-container-subpath-dynamicpv-nplw ready: false, restart count 3 Nov 25 19:30:24.264: INFO: Container test-container-volume-dynamicpv-nplw ready: false, restart count 0 Nov 25 19:30:24.264: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:05:32 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-resizer ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container hostpath ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container liveness-probe ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 25 19:30:24.264: INFO: pod-secrets-094dcf64-9945-44b2-9b52-1c7154bdebc2 started at 2022-11-25 19:14:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 19:30:24.264: INFO: csi-mockplugin-0 started at 2022-11-25 19:04:29 +0000 UTC (0+3 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-provisioner ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container driver-registrar ready: true, restart count 6 Nov 25 19:30:24.264: INFO: Container mock ready: true, restart count 6 Nov 25 19:30:24.264: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 19:04:29 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.264: INFO: Container csi-attacher ready: false, restart count 6 Nov 25 19:30:24.587: INFO: Latency metrics for node bootstrap-e2e-minion-group-ft5h Nov 25 19:30:24.587: INFO: Logging node info for node bootstrap-e2e-minion-group-p8wv Nov 25 19:30:24.630: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-p8wv 881c9872-bf9e-40c3-a0e6-f3f276af90f5 14128 0 2022-11-25 18:55:36 +0000 UTC 
<nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-p8wv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-p8wv topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:14:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:25:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:30:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-p8wv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:25:41 +0000 UTC,LastTransitionTime:2022-11-25 18:55:39 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:30:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:30:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:30:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:30:18 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.198.109.246,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-p8wv.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f8ac2f5b7ee9732248672e4b22a9ad9,SystemUUID:7f8ac2f5-b7ee-9732-2486-72e4b22a9ad9,BootID:ba8ee318-1295-42a5-a59a-f3bfc254bc58,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:30:24.630: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-p8wv Nov 25 19:30:24.681: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-p8wv Nov 25 19:30:24.750: INFO: coredns-6d97d5ddb-wrz6b started at 2022-11-25 18:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container coredns ready: false, restart count 10 Nov 25 19:30:24.750: INFO: httpd started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container httpd ready: false, restart count 13 Nov 25 19:30:24.750: INFO: csi-mockplugin-0 started at 2022-11-25 19:14:27 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:24.750: INFO: Container busybox ready: false, restart count 5 Nov 25 19:30:24.750: INFO: Container csi-provisioner ready: false, restart count 5 Nov 25 19:30:24.750: INFO: Container driver-registrar ready: false, restart count 5 Nov 25 19:30:24.750: INFO: Container mock ready: false, restart count 5 Nov 25 19:30:24.750: INFO: pod-configmaps-7e353c13-950c-4423-9808-a6f4226d3913 started at 2022-11-25 18:59:11 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:30:24.750: INFO: netserver-1 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container webserver ready: true, restart count 9 Nov 25 19:30:24.750: INFO: ss-1 started at 2022-11-25 19:10:29 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container webserver ready: true, restart count 5 Nov 25 19:30:24.750: INFO: konnectivity-agent-26n2n started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container konnectivity-agent ready: true, restart count 9 Nov 25 19:30:24.750: INFO: pod-configmaps-cff0010c-8d2d-4981-8991-10714a4dd75e started at 2022-11-25 18:58:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container agnhost-container ready: false, restart count 0 
Nov 25 19:30:24.750: INFO: metadata-proxy-v0.1-zw9qm started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:30:24.750: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:30:24.750: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:30:24.750: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-bds6n started at 2022-11-25 18:58:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container agnhost-container ready: true, restart count 6 Nov 25 19:30:24.750: INFO: external-local-update-gqvff started at 2022-11-25 18:59:02 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container netexec ready: true, restart count 9 Nov 25 19:30:24.750: INFO: pod-8bbfb4d6-d93f-46c7-bf6b-1853fc9cc35b started at 2022-11-25 18:59:01 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:30:24.750: INFO: csi-mockplugin-0 started at 2022-11-25 19:07:58 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:24.750: INFO: Container busybox ready: false, restart count 6 Nov 25 19:30:24.750: INFO: Container csi-provisioner ready: false, restart count 7 Nov 25 19:30:24.750: INFO: Container driver-registrar ready: false, restart count 7 Nov 25 19:30:24.750: INFO: Container mock ready: false, restart count 7 Nov 25 19:30:24.750: INFO: pod-configmaps-e12f18f3-6d62-4b7c-875c-989009ce82ac started at 2022-11-25 19:14:17 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 19:30:24.750: INFO: netserver-1 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container webserver ready: false, restart count 7 Nov 25 19:30:24.750: INFO: hostexec-bootstrap-e2e-minion-group-p8wv-ksk87 started at 2022-11-25 18:59:28 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container agnhost-container ready: false, restart count 8 Nov 25 19:30:24.750: INFO: kube-proxy-bootstrap-e2e-minion-group-p8wv started at 2022-11-25 18:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container kube-proxy ready: false, restart count 9 Nov 25 19:30:24.750: INFO: volume-prep-provisioning-9094 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container init-volume-provisioning-9094 ready: false, restart count 0 Nov 25 19:30:24.750: INFO: pod-secrets-b0320515-6b7a-4bc6-b46b-a61d623bd11e started at 2022-11-25 19:14:18 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:24.750: INFO: Container creates-volume-test ready: false, restart count 0 Nov 25 19:30:24.946: INFO: Latency metrics for node bootstrap-e2e-minion-group-p8wv Nov 25 19:30:24.946: INFO: Logging node info for node bootstrap-e2e-minion-group-rvwg Nov 25 19:30:24.988: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-rvwg d72b04ed-8c3e-4237-a1a1-842914101de6 13917 0 2022-11-25 18:55:36 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-rvwg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-rvwg 
topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumeio-2766":"bootstrap-e2e-minion-group-rvwg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 19:08:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 19:25:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 19:27:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-protobuf/us-west1-b/bootstrap-e2e-minion-group-rvwg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:25:46 +0000 UTC,LastTransitionTime:2022-11-25 18:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 18:55:51 +0000 UTC,LastTransitionTime:2022-11-25 18:55:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:30 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:30 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:27:30 +0000 UTC,LastTransitionTime:2022-11-25 18:55:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:27:30 +0000 UTC,LastTransitionTime:2022-11-25 18:55:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.2.247,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-rvwg.c.k8s-jkns-gci-gce-protobuf.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dbf5cb99be0ec068c5cd2f1643938098,SystemUUID:dbf5cb99-be0e-c068-c5cd-2f1643938098,BootID:e2f727ad-5c6c-4e26-854f-4f7e80c2c71f,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:30:24.989: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-rvwg Nov 25 19:30:25.033: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-rvwg Nov 25 19:30:25.097: INFO: l7-default-backend-8549d69d99-vsr6h started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 19:30:25.098: INFO: netserver-2 started at 2022-11-25 18:59:13 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container webserver ready: false, restart count 9 Nov 25 19:30:25.098: INFO: pod-b613c2de-9165-4dc0-b03e-fba4980e2ba0 started at 2022-11-25 18:59:47 +0000 UTC (0+1 container statuses recorded) Nov 
25 19:30:25.098: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:30:25.098: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-7tzv9 started at 2022-11-25 19:15:35 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container agnhost-container ready: false, restart count 6 Nov 25 19:30:25.098: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-x62bn started at 2022-11-25 18:58:59 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container agnhost-container ready: true, restart count 7 Nov 25 19:30:25.098: INFO: netserver-2 started at 2022-11-25 19:07:53 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container webserver ready: false, restart count 8 Nov 25 19:30:25.098: INFO: kube-dns-autoscaler-5f6455f985-5hbzc started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container autoscaler ready: false, restart count 9 Nov 25 19:30:25.098: INFO: csi-hostpathplugin-0 started at 2022-11-25 18:58:42 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:25.098: INFO: Container csi-attacher ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container csi-provisioner ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container csi-resizer ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container csi-snapshotter ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container hostpath ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container liveness-probe ready: true, restart count 8 Nov 25 19:30:25.098: INFO: Container node-driver-registrar ready: true, restart count 8 Nov 25 19:30:25.098: INFO: ss-2 started at 2022-11-25 19:10:44 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container webserver ready: true, restart count 4 Nov 25 19:30:25.098: INFO: mutability-test-2qn75 started at 2022-11-25 19:15:10 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container netexec ready: false, restart count 7 Nov 25 19:30:25.098: INFO: test-hostpath-type-gdrz6 started at 2022-11-25 19:15:23 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 25 19:30:25.098: INFO: metadata-proxy-v0.1-szbqx started at 2022-11-25 18:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 19:30:25.098: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:30:25.098: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:30:25.098: INFO: coredns-6d97d5ddb-l2w5l started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container coredns ready: false, restart count 9 Nov 25 19:30:25.098: INFO: csi-hostpathplugin-0 started at 2022-11-25 19:07:57 +0000 UTC (0+7 container statuses recorded) Nov 25 19:30:25.098: INFO: Container csi-attacher ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container csi-provisioner ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container csi-resizer ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container csi-snapshotter ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container hostpath ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container liveness-probe ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container node-driver-registrar ready: false, restart count 6 Nov 25 19:30:25.098: INFO: hostexec-bootstrap-e2e-minion-group-rvwg-fkmcp started at 2022-11-25 18:59:11 +0000 UTC (0+1 
container statuses recorded) Nov 25 19:30:25.098: INFO: Container agnhost-container ready: false, restart count 6 Nov 25 19:30:25.098: INFO: pod-72691924-c01b-4050-9991-a15a20879782 started at 2022-11-25 18:59:17 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:30:25.098: INFO: csi-mockplugin-0 started at 2022-11-25 19:08:38 +0000 UTC (0+4 container statuses recorded) Nov 25 19:30:25.098: INFO: Container busybox ready: false, restart count 7 Nov 25 19:30:25.098: INFO: Container csi-provisioner ready: false, restart count 7 Nov 25 19:30:25.098: INFO: Container driver-registrar ready: false, restart count 6 Nov 25 19:30:25.098: INFO: Container mock ready: false, restart count 6 Nov 25 19:30:25.098: INFO: kube-proxy-bootstrap-e2e-minion-group-rvwg started at 2022-11-25 18:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container kube-proxy ready: false, restart count 9 Nov 25 19:30:25.098: INFO: volume-snapshot-controller-0 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container volume-snapshot-controller ready: true, restart count 9 Nov 25 19:30:25.098: INFO: konnectivity-agent-9br57 started at 2022-11-25 18:55:51 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container konnectivity-agent ready: false, restart count 9 Nov 25 19:30:25.098: INFO: execpod-accept95kvb started at 2022-11-25 19:14:39 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 19:30:25.098: INFO: pod-a42d737b-fb10-4b6c-b071-ba961690b9ce started at 2022-11-25 18:59:38 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container write-pod ready: false, restart count 0 Nov 25 19:30:25.098: INFO: nfs-server started at 2022-11-25 19:15:08 +0000 UTC (0+1 container statuses recorded) Nov 25 19:30:25.098: INFO: Container nfs-server ready: true, restart count 5 Nov 25 19:30:25.302: INFO: Latency metrics for node bootstrap-e2e-minion-group-rvwg [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6235" for this suite. 11/25/22 19:30:25.302
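The per-node dump above (Node Info, kubelet events, and "pods the kubelet thinks is on node ...") is the framework's post-failure debug output. As a hedged, stand-alone sketch, and not the framework's own dump helper, the same per-node pod listing can be reproduced with a client-go field-selector query on spec.nodeName; logPodsOnNode is a hypothetical name and building the clientset (e.g. from a kubeconfig) is left out.

// Hedged sketch: reproduces the "Container <name> ready: <bool>, restart count <n>"
// style listing for one node via a spec.nodeName field selector.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// logPodsOnNode lists every pod scheduled to nodeName and prints per-container
// readiness and restart counts, mirroring the debug lines in this log.
func logPodsOnNode(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	sel := fields.OneTermEqualSelector("spec.nodeName", nodeName).String()
	pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s/%s started at %s (%d container statuses recorded)\n",
			pod.Namespace, pod.Name, pod.Status.StartTime, len(pod.Status.ContainerStatuses))
		for _, cs := range pod.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", cs.Name, cs.Ready, cs.RestartCount)
		}
	}
	return nil
}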
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/network/service.go:456 k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:456 +0x14e k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 +0x1670
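The failing frame is testRejectedHTTP (test/e2e/network/service.go:456), called from the LoadBalancers test at loadbalancer.go:251. As a hedged illustration only, without reproducing the upstream helper's exact behavior, the general "expect every poke to this endpoint to be refused" check looks roughly like the stand-alone program below; expectRejected and the TEST-NET address are hypothetical.

// Hedged sketch of an "expect rejection" poke loop; not the upstream testRejectedHTTP.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// expectRejected polls url until the timeout and fails if any request
// unexpectedly succeeds; connection errors (e.g. "connection refused", as in
// the log below) count as the expected rejection.
func expectRejected(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return fmt.Errorf("expected %s to reject connections, got HTTP %d", url, resp.StatusCode)
		}
		time.Sleep(2 * time.Second)
	}
	return nil
}

func main() {
	// Hypothetical endpoint; in the test this would be a node IP and NodePort.
	if err := expectRejected("http://203.0.113.10:30198/echo?msg=hello", 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}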
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 18:58:39.208 Nov 25 18:58:39.208: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 18:58:39.211 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 18:58:39.364 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 18:58:39.479 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to change the type and ports of a TCP service [Slow] test/e2e/network/loadbalancer.go:77 Nov 25 18:58:39.673: INFO: namespace for TCP test: loadbalancers-8000 STEP: creating a TCP service mutability-test with type=ClusterIP in namespace loadbalancers-8000 11/25/22 18:58:39.718 Nov 25 18:58:39.772: INFO: service port TCP: 80 STEP: creating a pod to be part of the TCP service mutability-test 11/25/22 18:58:39.772 Nov 25 18:58:39.816: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 18:58:39.861: INFO: Found all 1 pods Nov 25 18:58:39.861: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [mutability-test-js7zw] Nov 25 18:58:39.861: INFO: Waiting up to 2m0s for pod "mutability-test-js7zw" in namespace "loadbalancers-8000" to be "running and ready" Nov 25 18:58:39.902: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 40.726381ms Nov 25 18:58:39.902: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:41.951: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089296968s Nov 25 18:58:41.951: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:43.948: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087003636s Nov 25 18:58:43.948: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:45.955: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093364214s Nov 25 18:58:45.955: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:47.947: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085971763s Nov 25 18:58:47.947: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:49.944: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082188328s Nov 25 18:58:49.944: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:51.948: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.086463281s Nov 25 18:58:51.948: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:53.943: INFO: Pod "mutability-test-js7zw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081985532s Nov 25 18:58:53.943: INFO: Error evaluating pod condition running and ready: want pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' to be 'Running' but was 'Pending' Nov 25 18:58:55.944: INFO: Pod "mutability-test-js7zw": Phase="Running", Reason="", readiness=false. Elapsed: 16.082607376s Nov 25 18:58:55.944: INFO: Error evaluating pod condition running and ready: pod 'mutability-test-js7zw' on 'bootstrap-e2e-minion-group-rvwg' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:47 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:47 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 18:58:39 +0000 UTC }] Nov 25 18:58:57.945: INFO: Pod "mutability-test-js7zw": Phase="Running", Reason="", readiness=true. Elapsed: 18.08329303s Nov 25 18:58:57.945: INFO: Pod "mutability-test-js7zw" satisfied condition "running and ready" Nov 25 18:58:57.945: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [mutability-test-js7zw] STEP: changing the TCP service to type=NodePort 11/25/22 18:58:57.945 Nov 25 18:58:58.037: INFO: TCP node port: 30198 STEP: hitting the TCP service's NodePort 11/25/22 18:58:58.037 Nov 25 18:58:58.037: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:58:58.079: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:00.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:00.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:02.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:02.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:04.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:04.118: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:06.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:06.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:08.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:08.120: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:10.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:10.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get 
"http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:12.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:12.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:14.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:14.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:16.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:16.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:18.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:18.122: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: no route to host Nov 25 18:59:20.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:20.120: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: no route to host Nov 25 18:59:22.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:22.120: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: no route to host Nov 25 18:59:24.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:24.120: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: no route to host Nov 25 18:59:26.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:26.120: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: no route to host Nov 25 18:59:28.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:28.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:30.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:30.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:32.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:32.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:34.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:34.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:36.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:36.118: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:38.079: INFO: Poking 
"http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:38.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:40.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:40.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:42.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:42.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:44.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:44.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:46.079: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:46.119: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): Get "http://35.197.110.94:30198/echo?msg=hello": dial tcp 35.197.110.94:30198: connect: connection refused Nov 25 18:59:48.080: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello" Nov 25 18:59:48.162: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): success STEP: creating a static load balancer IP 11/25/22 18:59:48.162 Nov 25 18:59:50.046: INFO: Allocated static load balancer IP: 34.127.124.10 STEP: changing the TCP service to type=LoadBalancer 11/25/22 18:59:50.046 STEP: waiting for the TCP service to have a load balancer 11/25/22 18:59:50.2 Nov 25 18:59:50.200: INFO: Waiting up to 15m0s for service "mutability-test" to have a LoadBalancer Nov 25 18:59:52.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:54.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:56.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 18:59:58.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:00.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:02.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:04.323: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:06.322: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:08.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:10.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:12.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:14.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:16.323: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:18.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:20.322: INFO: Retrying .... error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused Nov 25 19:00:22.322: INFO: Retrying .... 
error trying to get Service mutability-test: Get "https://104.198.13.163/api/v1/namespaces/loadbalancers-8000/services/mutability-test": dial tcp 104.198.13.163:443: connect: connection refused
Nov 25 19:03:14.355: INFO: TCP load balancer: 34.127.124.10
STEP: demoting the static IP to ephemeral 11/25/22 19:03:14.355
STEP: hitting the TCP service's NodePort 11/25/22 19:03:16.222
Nov 25 19:03:16.223: INFO: Poking "http://35.197.110.94:30198/echo?msg=hello"
Nov 25 19:03:16.304: INFO: Poke("http://35.197.110.94:30198/echo?msg=hello"): success
STEP: hitting the TCP service's LoadBalancer 11/25/22 19:03:16.304
Nov 25 19:03:16.304: INFO: Poking "http://34.127.124.10:80/echo?msg=hello"
Nov 25 19:03:16.385: INFO: Poke("http://34.127.124.10:80/echo?msg=hello"): success
STEP: changing the TCP service's NodePort 11/25/22 19:03:16.385
Nov 25 19:03:16.636: INFO: TCP node port: 30199
STEP: hitting the TCP service's new NodePort 11/25/22 19:03:16.636
Nov 25 19:03:16.636: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:16.675: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): Get "http://35.197.110.94:30199/echo?msg=hello": dial tcp 35.197.110.94:30199: connect: connection refused
Nov 25 19:03:18.676: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:18.715: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): Get "http://35.197.110.94:30199/echo?msg=hello": dial tcp 35.197.110.94:30199: connect: connection refused
Nov 25 19:03:20.677: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:20.716: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): Get "http://35.197.110.94:30199/echo?msg=hello": dial tcp 35.197.110.94:30199: connect: connection refused
Nov 25 19:03:22.676: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:22.715: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): Get "http://35.197.110.94:30199/echo?msg=hello": dial tcp 35.197.110.94:30199: connect: connection refused
Nov 25 19:03:24.676: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:24.755: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): success
STEP: checking the old TCP NodePort is closed 11/25/22 19:03:24.756
Nov 25 19:03:24.756: INFO: Poking "http://35.197.110.94:30198/"
Nov 25 19:03:24.795: INFO: Poke("http://35.197.110.94:30198/"): Get "http://35.197.110.94:30198/": dial tcp 35.197.110.94:30198: connect: connection refused
STEP: hitting the TCP service's LoadBalancer 11/25/22 19:03:24.795
Nov 25 19:03:24.795: INFO: Poking "http://34.127.124.10:80/echo?msg=hello"
Nov 25 19:03:24.876: INFO: Poke("http://34.127.124.10:80/echo?msg=hello"): success
STEP: changing the TCP service's port 11/25/22 19:03:24.876
Nov 25 19:03:25.006: INFO: service port TCP: 81
STEP: hitting the TCP service's NodePort 11/25/22 19:03:25.006
Nov 25 19:03:25.007: INFO: Poking "http://35.197.110.94:30199/echo?msg=hello"
Nov 25 19:03:25.087: INFO: Poke("http://35.197.110.94:30199/echo?msg=hello"): success
STEP: hitting the TCP service's LoadBalancer 11/25/22 19:03:25.087
Nov 25 19:03:25.087: INFO: Poking "http://34.127.124.10:81/echo?msg=hello"
Nov 25 19:03:35.088: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 25 19:03:37.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello"
------------------------------
Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network]
LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m0.417s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m0s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 14.538s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc000021400, 0xc004ba3480, {{}, 0x0, {0xc004dacb40, 0x4}, {0xc0040a1bd0, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000021400, 0xc000d59300) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d59300?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d59200, {0x7fadc80, 0xc000021400}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001ab6ff0, 0xc000d59200, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001ab6ff0, 0xc000d59200) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc004dacb40?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc004dacb40, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:03:47.090: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:03:49.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:03:59.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m20.42s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 34.541s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:01.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:04:11.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:13.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 5m40.422s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 5m40.005s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 54.543s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc000021540, 0xc004ba3500, {{}, 0x0, {0xc004dacc00, 0x4}, {0xc0040a1c10, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000021540, 0xc000d59500) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d59500?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d59400, {0x7fadc80, 0xc000021540}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001ab74d0, 0xc000d59400, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001ab74d0, 0xc000d59400) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc004dacc00?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc004dacc00, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:23.095: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:25.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:04:35.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:37.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m0.424s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m0.007s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 1m14.545s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc0003caf00, 0xc004dfe000, {{}, 0x0, {0xc00448e060, 0x4}, {0xc004e60030, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc0003caf00, 0xc00077e300) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc00077e300?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc0000f5b00, {0x7fadc80, 0xc0003caf00}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001ece570, 0xc0000f5b00, {0x0?, 0xc00114f480?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001ece570, 0xc0000f5b00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x7541120?, {0xc00448e060?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00448e060, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) 
test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:04:47.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:47.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:04:57.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:04:59.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m20.427s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m20.011s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 1m34.549s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc004dc4b40, 0xc000e26280, {{}, 0x0, {0xc0009c90b0, 0x4}, {0xc004be0130, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc004dc4b40, 0xc000d58500) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d58500?, 0x7fadc80?) 
/usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d58400, {0x7fadc80, 0xc004dc4b40}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc003c84900, 0xc000d58400, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc003c84900, 0xc000d58400) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc0009c90b0?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc0009c90b0, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:09.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:11.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 6m40.43s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 6m40.013s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 1m54.551s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc0003cb040, 0xc004dfe080, {{}, 0x0, {0xc00448e120, 0x4}, {0xc004e600a0, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc0003cb040, 0xc000997900) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000997900?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc00077fa00, {0x7fadc80, 0xc0003cb040}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001eceb10, 0xc00077fa00, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001eceb10, 0xc00077fa00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc00448e120?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00448e120, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:21.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:23.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:05:33.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:33.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m0.432s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m0.015s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 2m14.553s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc003cf2b40, 0xc004dfe200, {{}, 0x0, {0xc00448ecf0, 0x4}, {0xc004e60190, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc003cf2b40, 0xc001439c00) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc001439c00?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc001439600, {0x7fadc80, 0xc003cf2b40}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001ecf1d0, 0xc001439600, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001ecf1d0, 0xc001439600) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc00448ecf0?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc00448ecf0, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) 
test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:05:43.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:45.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:05:55.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:05:57.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m20.434s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m20.017s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 2m34.555s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc004dc5680, 0xc000e26400, {{}, 0x0, {0xc0009c9c80, 0x4}, {0xc004be0250, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc004dc5680, 0xc000d58900) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc000d58900?, 0x7fadc80?) 
/usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc000d58800, {0x7fadc80, 0xc004dc5680}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc003c853e0, 0xc000d58800, {0x0?, 0x262a61f?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc003c853e0, 0xc000d58800) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc0009c9c80?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc0009c9c80, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) 
test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:07.088: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:07.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:06:17.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:19.088: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 7m40.437s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 7m40.02s) test/e2e/network/loadbalancer.go:77 At [By Step] hitting the TCP service's LoadBalancer (Step Runtime: 2m54.558s) test/e2e/network/loadbalancer.go:243 Spec Goroutine goroutine 409 [select] net/http.(*Transport).getConn(0xc000020140, 0xc004dfe400, {{}, 0x0, {0xc004dac540, 0x4}, {0xc004e60320, 0x10}, 0x0}) /usr/local/go/src/net/http/transport.go:1376 net/http.(*Transport).roundTrip(0xc000020140, 0xc001096800) /usr/local/go/src/net/http/transport.go:582 net/http.(*Transport).RoundTrip(0xc001096800?, 0x7fadc80?) /usr/local/go/src/net/http/roundtrip.go:17 net/http.send(0xc001096200, {0x7fadc80, 0xc000020140}, {0x74d54e0?, 0x26b3a01?, 0xae40400?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001ecf650, 0xc001096200, {0x0?, 0xc00114f480?, 0xae40400?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001ecf650, 0xc001096200) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 net/http.(*Client).Get(0x2?, {0xc004dac540?, 0x9?}) /usr/local/go/src/net/http/client.go:479 k8s.io/kubernetes/test/e2e/framework/network.httpGetNoConnectionPoolTimeout({0xc004dac540, 0x26}, 0x2540be400) test/e2e/framework/network/utils.go:1065 k8s.io/kubernetes/test/e2e/framework/network.PokeHTTP({0xc004e60c70, 0xd}, 0x51, {0x75ddb6b, 0xf}, 0xc00114fb04?) test/e2e/framework/network/utils.go:998 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes.func1() test/e2e/framework/service/util.go:35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0x5?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc004e56c90, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x30?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc00100b680?, 0x76888de?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTPWithRetriableErrorCodes({0xc004e60c70, 0xd}, 0x51, {0xae73300, 0x0, 0x0}, 0x1?) test/e2e/framework/service/util.go:46 k8s.io/kubernetes/test/e2e/framework/service.TestReachableHTTP(...) test/e2e/framework/service/util.go:29 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:244 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:29.089: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): Get "http://34.127.124.10:81/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 25 19:06:29.089: INFO: Poking "http://34.127.124.10:81/echo?msg=hello" Nov 25 19:06:29.170: INFO: Poke("http://34.127.124.10:81/echo?msg=hello"): success STEP: Scaling the pods to 0 11/25/22 19:06:29.17 Nov 25 19:06:33.623: INFO: Waiting up to 2m0s for 0 pods to be created Nov 25 19:06:33.668: INFO: Found 1/0 pods - will retry Nov 25 19:06:35.710: INFO: Found 1/0 pods - will retry Nov 25 19:06:37.775: INFO: Found 1/0 pods - will retry ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m0.44s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m0.023s) test/e2e/network/loadbalancer.go:77 At [By Step] Scaling the pods to 0 (Step Runtime: 10.478s) test/e2e/network/loadbalancer.go:246 Spec Goroutine goroutine 409 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsCreated(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:803 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Scale(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:773 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:247 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:06:39.818: INFO: Found 1/0 pods - will retry Nov 25 19:06:41.861: INFO: Found 1/0 pods - will retry Nov 25 19:06:43.903: INFO: Found 1/0 
pods - will retry Nov 25 19:06:45.947: INFO: Found 1/0 pods - will retry Nov 25 19:06:47.990: INFO: Found 1/0 pods - will retry Nov 25 19:06:50.034: INFO: Found 1/0 pods - will retry Nov 25 19:06:52.076: INFO: Found 1/0 pods - will retry Nov 25 19:06:54.119: INFO: Found 1/0 pods - will retry Nov 25 19:06:56.163: INFO: Found 1/0 pods - will retry Nov 25 19:06:58.205: INFO: Found 1/0 pods - will retry ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m20.442s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m20.025s) test/e2e/network/loadbalancer.go:77 At [By Step] Scaling the pods to 0 (Step Runtime: 30.48s) test/e2e/network/loadbalancer.go:246 Spec Goroutine goroutine 409 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsCreated(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:803 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Scale(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:773 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:247 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:00.248: INFO: Found 1/0 pods - will retry Nov 25 19:07:02.291: INFO: Found 1/0 pods - will retry Nov 25 19:07:04.333: INFO: Found 1/0 pods - will retry Nov 25 19:07:06.381: INFO: Found 1/0 pods - will retry Nov 25 19:07:08.424: INFO: Found 1/0 pods - will retry Nov 25 19:07:10.466: INFO: Found 1/0 pods - will retry Nov 25 19:07:12.508: INFO: Found 1/0 pods - will retry Nov 25 19:07:14.556: INFO: Found 1/0 pods - will retry Nov 25 19:07:16.601: INFO: Found 1/0 pods - will retry Nov 25 19:07:18.643: INFO: Found 1/0 pods - will retry ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 8m40.444s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 8m40.027s) test/e2e/network/loadbalancer.go:77 At [By Step] Scaling the pods to 0 (Step Runtime: 50.482s) test/e2e/network/loadbalancer.go:246 Spec Goroutine goroutine 409 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsCreated(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:803 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Scale(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:773 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:247 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode 
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:20.690: INFO: Found 1/0 pods - will retry Nov 25 19:07:22.733: INFO: Found 1/0 pods - will retry Nov 25 19:07:24.776: INFO: Found 1/0 pods - will retry Nov 25 19:07:26.820: INFO: Found 1/0 pods - will retry Nov 25 19:07:28.863: INFO: Found 1/0 pods - will retry Nov 25 19:07:30.906: INFO: Found 1/0 pods - will retry Nov 25 19:07:32.949: INFO: Found 1/0 pods - will retry Nov 25 19:07:34.992: INFO: Found 1/0 pods - will retry Nov 25 19:07:37.035: INFO: Found 1/0 pods - will retry Nov 25 19:07:39.077: INFO: Found 1/0 pods - will retry ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m0.446s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m0.029s) test/e2e/network/loadbalancer.go:77 At [By Step] Scaling the pods to 0 (Step Runtime: 1m10.484s) test/e2e/network/loadbalancer.go:246 Spec Goroutine goroutine 409 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForPodsCreated(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:803 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).Scale(0xc004ae8cd0, 0x0) test/e2e/framework/service/jig.go:773 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:247 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:07:41.120: INFO: Found 1/0 pods - will retry Nov 25 19:07:43.164: INFO: Found 1/0 pods - will retry Nov 25 19:07:45.222: INFO: Found 1/0 pods - will retry Nov 25 19:07:47.266: INFO: Found 1/0 pods - will retry Nov 25 19:07:49.309: INFO: Found 1/0 pods - will retry Nov 25 19:07:51.352: INFO: Found 1/0 pods - will retry Nov 25 19:07:53.417: INFO: Found all 0 pods Nov 25 19:07:53.417: INFO: Waiting up to 2m0s for 0 pods to be running and ready: [] Nov 25 19:07:53.417: INFO: Wanted all 0 pods to be running and ready. Result: true. 
Pods: [] STEP: looking for ICMP REJECT on the TCP service's NodePort 11/25/22 19:07:53.417 Nov 25 19:07:53.417: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:07:53.459: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:07:55.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:07:55.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:07:57.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:07:57.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:07:59.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:07:59.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m20.448s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m20.031s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 6.24s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) 
test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:08:01.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:01.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:03.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:03.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:05.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:05.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:07.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:07.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:09.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:09.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:11.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:11.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:13.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:13.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:15.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:15.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:17.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:17.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:19.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:19.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 9m40.451s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 9m40.034s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 26.242s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:08:21.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:21.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:23.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:23.502: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:25.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:25.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:27.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:27.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:29.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:29.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:31.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:31.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:33.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:33.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:35.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:35.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:37.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:37.513: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:39.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:39.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling 
progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m0.453s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m0.036s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 46.244s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:08:41.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:41.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:43.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:43.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:45.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:45.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:47.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:47.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:49.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:49.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:51.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:51.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:53.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:53.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:55.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:55.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host 
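Every poke in this step fails with "no route to host" (EHOSTUNREACH), while the step only completes once a connection attempt to the NodePort comes back cleanly rejected, so the wait.PollImmediate loop driving testRejectedHTTP (test/e2e/network/service.go:455) never sees its condition become true. The sketch below shows the general shape of such a reject-polling loop; it assumes the success condition is a "connection refused" (ECONNREFUSED) dial error and is illustrative only, not the e2e helper itself.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollUntilRejected keeps poking the URL until the connection attempt fails
// with ECONNREFUSED; "no route to host" (EHOSTUNREACH), as seen in this log,
// never satisfies the condition, so the loop can only exit via the timeout.
func pollUntilRejected(url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("still serving - will retry")
			return false, nil
		}
		fmt.Printf("Poke(%q): %v\n", url, err)
		return errors.Is(err, syscall.ECONNREFUSED), nil
	})
}

func main() {
	if err := pollUntilRejected("http://35.197.110.94:30199/"); err != nil {
		// On timeout this prints wait's "timed out waiting for the condition".
		fmt.Println("FAIL:", err)
	}
}

With every probe in this step returning EHOSTUNREACH, a loop of this shape has no path to success and ends in the poll timeout.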
Nov 25 19:08:57.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:57.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:08:59.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:08:59.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m20.455s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m20.038s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 1m6.246s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:09:01.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:01.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:03.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:03.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:05.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:05.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:07.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:07.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:09.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:09.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:11.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:11.501: INFO: 
Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:13.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:13.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:15.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:15.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:17.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:17.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:19.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:19.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 10m40.458s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 10m40.041s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 1m26.249s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) 
test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:09:21.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:21.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:23.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:23.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:25.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:25.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:27.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:27.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:29.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:29.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:31.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:31.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:33.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:33.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:35.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:35.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:37.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:37.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:39.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:39.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m0.46s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m0.043s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 1m46.251s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:09:41.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:41.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:43.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:43.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:45.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:45.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:47.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:47.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:49.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:49.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:51.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:51.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:53.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:53.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:55.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:55.502: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:57.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:57.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:09:59.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:09:59.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling 
progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m20.462s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m20.045s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 2m6.253s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:10:01.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:01.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:03.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:03.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:05.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:05.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:07.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:07.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:09.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:09.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:11.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:11.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:13.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:13.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:15.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:15.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to 
host Nov 25 19:10:17.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:17.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:19.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:19.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 11m40.465s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 11m40.048s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 2m26.256s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:10:21.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:21.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:23.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:23.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:25.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:25.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:27.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:27.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:29.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:29.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:31.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:31.501: INFO: 
Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:33.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:33.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:35.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:35.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:37.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:37.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:39.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:39.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 12m0.467s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 12m0.05s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 2m46.258s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) 
test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:10:41.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:41.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:43.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:43.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:45.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:45.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:47.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:47.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:49.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:49.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:51.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:51.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:53.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:53.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:55.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:55.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:57.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:57.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:10:59.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:10:59.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 12m20.469s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 12m20.052s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 3m6.26s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:11:01.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:01.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:03.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:03.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:05.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:05.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:07.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:07.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:09.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:09.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:11.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:11.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:13.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:13.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:15.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:15.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:17.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:17.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:19.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:19.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling 
progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 12m40.471s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 12m40.054s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 3m26.262s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:11:21.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:21.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:23.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:23.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:25.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:25.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:27.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:27.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:29.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:29.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:31.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:31.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:33.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:33.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:35.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:35.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to 
host Nov 25 19:11:37.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:37.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:39.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:39.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 13m0.473s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 13m0.056s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 3m46.264s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:11:41.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:41.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:43.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:43.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:45.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:45.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:47.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:47.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:49.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:49.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:51.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:51.501: INFO: 
Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:53.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:53.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:55.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:55.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:57.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:57.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:11:59.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:11:59.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 13m20.475s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 13m20.058s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 4m6.266s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) 
test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:12:01.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:01.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:03.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:03.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:05.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:05.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:07.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:07.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:09.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:09.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:11.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:11.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:13.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:13.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:15.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:15.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:17.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:17.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:19.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:19.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 13m40.477s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 13m40.061s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 4m26.269s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:12:21.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:21.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:23.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:23.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:25.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:25.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:27.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:27.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:29.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:29.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:31.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:31.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:33.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:33.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:35.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:35.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:37.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:37.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:39.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:39.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host ------------------------------ Progress Report for Ginkgo Process #8 Automatically polling 
progress: [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow] (Spec Runtime: 14m0.48s) test/e2e/network/loadbalancer.go:77 In [It] (Node Runtime: 14m0.063s) test/e2e/network/loadbalancer.go:77 At [By Step] looking for ICMP REJECT on the TCP service's NodePort (Step Runtime: 4m46.271s) test/e2e/network/loadbalancer.go:250 Spec Goroutine goroutine 409 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0002e3a28, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xd0?, 0x2fd9d05?, 0x10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x0?, 0xc00114fc20?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x754e980?, 0xc0012b0510?, 0x76fcc8a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?) test/e2e/network/service.go:455 > k8s.io/kubernetes/test/e2e/network.glob..func19.3() test/e2e/network/loadbalancer.go:251 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc0049e6a80}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 19:12:41.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:41.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:43.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:43.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:45.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:45.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:47.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:47.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:49.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:49.500: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:51.460: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:51.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:53.461: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:53.501: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host Nov 25 19:12:53.501: INFO: Poking "http://35.197.110.94:30199/" Nov 25 19:12:53.541: INFO: Poke("http://35.197.110.94:30199/"): Get "http://35.197.110.94:30199/": dial tcp 35.197.110.94:30199: connect: no route to host 
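After about five minutes of these probes the poll gives up, which is where the "timed out waiting for the condition" in the FAIL line below comes from. When triaging a run like this it can help to classify the raw dial error on the NodePort directly, separating a clean reject (ECONNREFUSED, which the test is presumably waiting for) from the host-unreachable answers seen throughout this step. The snippet below is an illustrative triage aid using the address and port from this log; it is not part of the test.

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// NodePort endpoint taken from the log above; adjust as needed.
	conn, err := net.DialTimeout("tcp", "35.197.110.94:30199", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("connected: the port is still serving, not rejected")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("connection refused: a clean reject")
	case errors.Is(err, syscall.EHOSTUNREACH):
		fmt.Println("no route to host: node or firewall unreachable, not a clean reject")
	default:
		fmt.Println("other error:", err)
	}
}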
Nov 25 19:12:53.542: FAIL: HTTP service 35.197.110.94:30199 not rejected: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.testRejectedHTTP({0xc004aabec0, 0xd}, 0x75f7, 0x0?)
  test/e2e/network/service.go:456 +0x14e
k8s.io/kubernetes/test/e2e/network.glob..func19.3()
  test/e2e/network/loadbalancer.go:251 +0x1670
[AfterEach] [sig-network] LoadBalancers
  test/e2e/framework/node/init/init.go:32
Nov 25 19:12:53.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers
  test/e2e/network/loadbalancer.go:71
Nov 25 19:12:53.711: INFO: Output of kubectl describe svc:
Nov 25 19:12:53.711: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://104.198.13.163 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8000 describe svc --namespace=loadbalancers-8000'
Nov 25 19:12:54.050: INFO: stderr: ""
Nov 25 19:12:54.050: INFO: stdout: "Name: mutability-test\nNamespace: loadbalancers-8000\nLabels: testid=mutability-test-5421a0d1-54ab-49fe-bcb7-600ffc61347d\nAnnotations: <none>\nSelector: testid=mutability-test-5421a0d1-54ab-49fe-bcb7-600ffc61347d\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.39.223\nIPs: 10.0.39.223\nIP: 34.127.124.10\nLoadBalancer Ingress: 34.127.124.10\nPort: <unset> 81/TCP\nTargetPort: 80/TCP\nNodePort: <unset> 30199/TCP\nEndpoints: <none>\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Type 13m service-controller NodePort -> LoadBalancer\n Normal EnsuringLoadBalancer 7m59s (x2 over 10m) service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 7m16s (x2 over 9m42s) service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 3m31s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 3m27s service-controller Ensured load balancer\n"
Nov 25 19:12:54.050: INFO: Name: mutability-test
Namespace: loadbalancers-8000
Labels: testid=mutability-test-5421a0d1-54ab-49fe-bcb7-600ffc61347d
Annotations: <none>
Selector: testid=mutability-test-5421a0d1-54ab-49fe-bcb7-600ffc61347d
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.39.223
IPs: 10.0.39.223
IP: 34.127.124.10
LoadBalancer Ingress: 34.127.124.10
Port: <unset> 81/TCP
TargetPort: 80/TCP
NodePort: <unset> 30199/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
  Type    Reason                Age                    From                Message
  ----    ------                ----                   ----                -------
  Normal  Type                  13m                    service-controller  NodePort -> LoadBalancer
  Normal  EnsuringLoadBalancer  7m59s (x2 over 10m)    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   7m16s (x2 over 9m42s)  service-controller  Ensured load balancer
  Normal  EnsuringLoadBalancer  3m31s                  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   3m27s                  service-controller  Ensured load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 19:12:54.05
STEP: Collecting events from namespace "loadbalancers-8000". 11/25/22 19:12:54.05
STEP: Found 14 events. 11/25/22 19:12:54.093
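The "Output of kubectl describe svc" step above comes from the AfterEach hook shelling out to the kubectl binary with the test kubeconfig and namespace. A rough stand-alone sketch of that kind of invocation follows; the helper name and argument handling are illustrative, not the framework's exact wrapper, and the server, kubeconfig path, and namespace are copied from the log above.

// describesvc_sketch.go: shell out to kubectl the way the namespace dump does,
// capturing "describe svc" output for a test namespace.
package main

import (
	"fmt"
	"os/exec"
)

func describeServices(kubectlPath, server, kubeconfig, namespace string) (string, error) {
	cmd := exec.Command(kubectlPath,
		"--server="+server,
		"--kubeconfig="+kubeconfig,
		"--namespace="+namespace,
		"describe", "svc")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// "kubectl" stands in for the workspace-local binary used by the CI job.
	out, err := describeServices(
		"kubectl",
		"https://104.198.13.163",
		"/workspace/.kube/config",
		"loadbalancers-8000",
	)
	if err != nil {
		fmt.Println("kubectl describe svc failed:", err)
	}
	fmt.Print(out)
}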
Nov 25 19:12:54.093: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for mutability-test-js7zw: { } Scheduled: Successfully assigned loadbalancers-8000/mutability-test-js7zw to bootstrap-e2e-minion-group-rvwg
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:39 +0000 UTC - event for mutability-test: {replication-controller } SuccessfulCreate: Created pod: mutability-test-js7zw
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:44 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:44 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} Created: Created container netexec
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:45 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} Started: Started container netexec
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:46 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} Killing: Stopping container netexec
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:47 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:51 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} BackOff: Back-off restarting failed container netexec in pod mutability-test-js7zw_loadbalancers-8000(95a79daa-9afe-402b-9c90-f4d5c308d0be)
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:58:51 +0000 UTC - event for mutability-test-js7zw: {kubelet bootstrap-e2e-minion-group-rvwg} Unhealthy: Readiness probe failed: Get "http://10.64.3.26:80/hostName": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 25 19:12:54.093: INFO: At 2022-11-25 18:59:50 +0000 UTC - event for mutability-test: {service-controller } Type: NodePort -> LoadBalancer
Nov 25 19:12:54.093: INFO: At 2022-11-25 19:02:47 +0000 UTC - event for mutability-test: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 25 19:12:54.093: INFO: At 2022-11-25 19:03:12 +0000 UTC - event for mutability-test: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 25 19:12:54.093: INFO: At 2022-11-25 19:09:23 +0000 UTC - event for mutability-test: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 25 19:12:54.093: INFO: At 2022-11-25 19:09:27 +0000 UTC - event for mutability-test: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 25 19:12:54.133: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 25 19:12:54.133: INFO:
Nov 25 19:12:54.182: INFO: Logging node info for node bootstrap-e2e-master
Nov 25 19:12:54.224: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 063993ed-280a-4252-9455-5251e2ae7f68 10305 0 2022-11-25 18:55:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 18:55:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 18:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:11:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:mes
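The namespace dump above ("Collecting events from namespace ...", "Logging node info for node bootstrap-e2e-master") is gathered through the Kubernetes API. A minimal client-go sketch of an equivalent dump follows; the program structure and output formatting are illustrative, not the e2e framework's debug helpers, and the kubeconfig path, namespace, and node name are taken from the log above.

// dumpsketch.go: list the events in a namespace and fetch a node object,
// roughly mirroring what the failure dump above prints.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Equivalent of "Collecting events from namespace ..." / "Found N events."
	events, err := client.CoreV1().Events("loadbalancers-8000").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %v - event for %s: {%s %s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}

	// Equivalent of "Logging node info for node bootstrap-e2e-master".
	node, err := client.CoreV1().Nodes().Get(ctx, "bootstrap-e2e-master", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Node Info: %+v\n", node)
}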