go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc

There were additional failures detected after the initial failure:
[FAILED] Nov 26 21:27:58.334: failed to list events in namespace "chunking-2842": Get "https://35.233.174.213/api/v1/namespaces/chunking-2842/events": dial tcp 35.233.174.213:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 21:27:58.374: Couldn't delete ns: "chunking-2842": Delete "https://35.233.174.213/api/v1/namespaces/chunking-2842": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/chunking-2842", Err:(*net.OpError)(0xc003cc6960)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 21:24:00.123
Nov 26 21:24:00.123: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename chunking 11/26/22 21:24:00.125
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:24:00.28
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:24:00.369
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/apimachinery/chunking.go:51
STEP: creating a large number of resources 11/26/22 21:24:00.45
[It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
  test/e2e/apimachinery/chunking.go:126
STEP: retrieving the first page 11/26/22 21:24:18.032
Nov 26 21:24:18.174: INFO: Retrieved 40/40 results with rv 14695 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9
STEP: retrieving the second page until the token expires 11/26/22 21:24:18.174
Nov 26 21:24:38.261: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:24:58.239: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:25:18.237: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:25:38.252: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:25:58.279: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:26:18.365: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:26:38.250: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:26:58.257: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:27:18.233: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
Nov 26 21:27:38.254: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTQ2OTUsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet
STEP: retrieving the second page again with the token received with the error message 11/26/22 21:27:58.215
Nov 26 21:27:58.254: INFO: Unexpected error: failed to list pod templates in namespace: chunking-2842, given inconsistent continue token and limit: 40:
    <*url.Error | 0xc003c6c000>: {
        Op: "Get",
        URL: "https://35.233.174.213/api/v1/namespaces/chunking-2842/podtemplates?limit=40",
        Err: <*net.OpError | 0xc0014c8280>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc003e7c510>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc00111a000>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 26 21:27:58.255: FAIL: failed to list pod templates in namespace: chunking-2842, given inconsistent continue token and limit: 40: Get "https://35.233.174.213/api/v1/namespaces/chunking-2842/podtemplates?limit=40": dial tcp 35.233.174.213:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc

[AfterEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/node/init/init.go:32
Nov 26 21:27:58.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 21:27:58.294
STEP: Collecting events from namespace "chunking-2842". 11/26/22 21:27:58.294
Nov 26 21:27:58.334: INFO: Unexpected error: failed to list events in namespace "chunking-2842":
    <*url.Error | 0xc003e7c540>: {
        Op: "Get",
        URL: "https://35.233.174.213/api/v1/namespaces/chunking-2842/events",
        Err: <*net.OpError | 0xc0036ee6e0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc00356eb40>{
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc000f762e0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Nov 26 21:27:58.334: FAIL: failed to list events in namespace "chunking-2842": Get "https://35.233.174.213/api/v1/namespaces/chunking-2842/events": dial tcp 35.233.174.213:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001b305c0, {0xc0031f5460, 0xd})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001e021a0}, {0xc0031f5460, 0xd})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001b30650?, {0xc0031f5460?, 0x7fa7740?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001102780)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x6627cc0?, 0xc00184d3f0?, 0xc00360cfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc003248f28?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc00184d3f0?, 0x29449fc?}, {0xae73300?, 0xc00360cf80?, 0x2a6d786?})
	/usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking
  tear down framework | framework.go:193
STEP: Destroying namespace "chunking-2842" for this suite. 11/26/22 21:27:58.334
Nov 26 21:27:58.374: FAIL: Couldn't delete ns: "chunking-2842": Delete "https://35.233.174.213/api/v1/namespaces/chunking-2842": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/chunking-2842", Err:(*net.OpError)(0xc003cc6960)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001102780)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x6627cc0?, 0xc00184d330?, 0xc0039e5fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x6627cc0?, 0xc00184d330?, 0x0?}, {0xae73300?, 0x5?, 0xc003cbd020?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003f24b60}, 0xc001745900)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0001d0380?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
	test/e2e/apps/statefulset.go:719 +0x3d0

There were additional failures detected after the initial failure:
[FAILED] Nov 26 20:57:15.373: Get "https://35.233.174.213/apis/apps/v1/namespaces/statefulset-4883/statefulsets": stream error: stream ID 33; INTERNAL_ERROR; received from peer
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76

from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet
  set up framework | framework.go:178
STEP: Creating a kubernetes client 11/26/22 20:43:11.314
Nov 26 20:43:11.315: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset 11/26/22 20:43:11.316
STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 20:43:11.443
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 20:43:11.524
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/apps/statefulset.go:98
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:113
STEP: Creating service test in namespace statefulset-4883 11/26/22 20:43:11.646
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/apps/statefulset.go:697
STEP: Creating stateful set ss in namespace statefulset-4883 11/26/22 20:43:11.705
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4883 11/26/22 20:43:11.751
Nov 26 20:43:11.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Nov 26 20:43:21.845: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Nov 26 20:43:31.856: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:43:41.873: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:43:51.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:44:01.883: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:44:11.848: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:44:21.846: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 11/26/22 20:44:21.846 Nov 26 20:44:21.889: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 20:44:22.451: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 20:44:22.451: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 20:44:22.451: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 20:44:22.493: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 26 20:44:32.537: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 26 20:44:32.537: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 20:44:32.709: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 20:44:32.709: INFO: ss-0 bootstrap-e2e-minion-group-b1s2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC }] Nov 26 20:44:32.709: INFO: Nov 26 20:44:32.709: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 20:44:33.809: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 20:44:33.809: INFO: ss-0 bootstrap-e2e-minion-group-b1s2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2022-11-26 20:44:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC }] Nov 26 20:44:33.809: INFO: Nov 26 20:44:33.809: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 26 20:44:34.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.856978722s Nov 26 20:44:36.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.695409771s Nov 26 20:44:37.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.65013155s Nov 26 20:44:38.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.600108293s Nov 26 20:44:39.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.557142095s Nov 26 20:44:40.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.432830968s Nov 26 20:45:08.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.327614191s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 11/26/22 20:45:09.345 Nov 26 20:45:09.389: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:45:09.731: INFO: rc: 1 Nov 26 20:45:09.731: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: 
container not found ("webserver") error: exit status 1 Nov 26 20:45:19.732: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:45:20.075: INFO: rc: 1 Nov 26 20:45:20.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:45:30.076: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:45:30.415: INFO: rc: 1 Nov 26 20:45:30.415: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:45:40.415: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:45:40.764: INFO: rc: 1 Nov 
26 20:45:40.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:45:50.765: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:45:51.147: INFO: rc: 1 Nov 26 20:45:51.147: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:46:01.147: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:46:44.400: INFO: rc: 1 Nov 26 20:46:44.400: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade 
connection: container not found ("webserver") error: exit status 1 Nov 26 20:46:54.401: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:46:54.745: INFO: rc: 1 Nov 26 20:46:54.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:04.745: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:47:05.087: INFO: rc: 1 Nov 26 20:47:05.087: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:15.087: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:47:15.474: 
INFO: rc: 1 Nov 26 20:47:15.474: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:25.475: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:47:25.939: INFO: rc: 1 Nov 26 20:47:25.939: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:35.939: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:47:36.429: INFO: rc: 1 Nov 26 20:47:36.429: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: 
unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:46.429: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 20:47:46.969: INFO: rc: 1 Nov 26 20:47:46.969: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 26 20:47:56.970: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m0.391s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m0.001s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 3m2.361s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 705 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000c758c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0012103e0?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0012103e0, 0x10}, {0xc0012103cc, 0x4}, {0xc0035bccc0, 0x38}, 0xc000dedc30?, 0x45d964b800) test/e2e/framework/pod/output/output.go:105 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:718 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m20.394s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m20.003s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 3m22.363s) 
test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 705 [select] k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000c758c0?, 0x0?}) test/e2e/framework/kubectl/builder.go:125 k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...) test/e2e/framework/kubectl/builder.go:107 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0012103e0?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?}) test/e2e/framework/kubectl/builder.go:154 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...) test/e2e/framework/pod/output/output.go:82 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0012103e0, 0x10}, {0xc0012103cc, 0x4}, {0xc0035bccc0, 0x38}, 0xc000dedc30?, 0x45d964b800) test/e2e/framework/pod/output/output.go:105 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?) 
test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m40.395s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 5m40.005s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 3m42.365s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000c758c0?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc0012103e0?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0012103e0, 0x10}, {0xc0012103cc, 0x4}, {0xc0035bccc0, 0x38}, 0xc000dedc30?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:48:57.167: INFO: rc: 1
Nov 26 20:48:57.167: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer

error:
exit status 1
Nov 26 20:49:07.168: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:49:07.515: INFO: rc: 1
Nov 26 20:49:07.515: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m0.398s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 6m0.007s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 4m2.367s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [sleep]
time.Sleep(0x2540be400)
    /usr/local/go/src/runtime/time.go:195
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0012103e0, 0x10}, {0xc0012103cc, 0x4}, {0xc0035bccc0, 0x38}, 0xc000dedc30?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:113
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:49:17.517: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:49:17.884: INFO: rc: 1
Nov 26 20:49:17.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Nov 26 20:49:27.885: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:49:28.266: INFO: rc: 1
Nov 26 20:49:28.266: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m20.399s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 6m20.009s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 4m22.369s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [sleep]
time.Sleep(0x2540be400)
    /usr/local/go/src/runtime/time.go:195
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0012103e0, 0x10}, {0xc0012103cc, 0x4}, {0xc0035bccc0, 0x38}, 0xc000dedc30?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:113
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:49:38.267: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:49:38.794: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Nov 26 20:49:38.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 26 20:49:38.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Nov 26 20:49:38.794: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:49:39.134: INFO: rc: 1
Nov 26 20:49:39.134: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Nov 26 20:49:49.135: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m40.401s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 6m40.011s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 4m42.371s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52000?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m0.403s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 7m0.013s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 5m2.373s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52000?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:50:19.476: INFO: rc: 1
Nov 26 20:50:19.477: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server: error dialing backend: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory"

error:
exit status 1
Nov 26 20:50:29.477: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m20.405s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 7m20.015s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 5m22.375s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52580?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m40.408s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 7m40.017s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 5m42.377s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52580?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m0.409s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 8m0.019s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 6m2.379s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52580?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:51:29.687: INFO: rc: 1
Nov 26 20:51:29.687: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer

error:
exit status 1
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m20.411s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 8m20.021s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 6m22.381s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [sleep]
time.Sleep(0x2540be400)
    /usr/local/go/src/runtime/time.go:195
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:113
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:51:39.688: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m40.413s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 8m40.023s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 6m42.383s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc001f52000?, 0x0?})
    test/e2e/framework/kubectl/builder.go:125
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
    test/e2e/framework/kubectl/builder.go:107
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc001210620?, 0x4?}, {0xc002e7f908?, 0x29?, 0xc002e7f8c8?})
    test/e2e/framework/kubectl/builder.go:154
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
    test/e2e/framework/pod/output/output.go:82
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc001210620, 0x10}, {0xc0012105cc, 0x4}, {0xc0035bccc0, 0x38}, 0x3?, 0x45d964b800)
    test/e2e/framework/pod/output/output.go:105
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc003f24b60?}, 0xc002e7fe88?, {0xc0035bccc0, 0x38})
    test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc003f24b60}, 0x0?)
    test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:718
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:51:53.807: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Nov 26 20:51:53.807: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 26 20:51:53.807: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Nov 26 20:51:53.807: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4883 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 26 20:51:55.051: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Nov 26 20:51:55.051: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 26 20:51:55.051: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Nov 26 20:51:55.244: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:52:05.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m0.416s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 9m0.025s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 7m2.385s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:52:15.288: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:52:25.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m20.417s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 9m20.027s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 7m22.387s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:52:35.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:52:45.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m40.419s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 9m40.029s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 7m42.389s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:52:55.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:53:05.291: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 10m0.421s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 10m0.03s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 8m2.39s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:53:15.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:53:25.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 10m20.423s)
    test/e2e/apps/statefulset.go:697
In [It] (Node Runtime: 10m20.033s)
    test/e2e/apps/statefulset.go:697
At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 8m22.393s)
    test/e2e/apps/statefulset.go:717

Spec Goroutine
goroutine 705 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:53:35.294: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 26 20:53:45.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
------------------------------
Progress Report for Ginkgo Process #7
Automatically polling progress:
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 10m40.425s)
    test/e2e/apps/statefulset.go:697
In [It]
(Node Runtime: 10m40.035s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 8m42.395s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 705 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:719 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:53:55.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 26 20:54:05.376: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 11m0.429s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 11m0.038s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 9m2.398s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 705 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:719 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #7 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 11m20.432s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 11m20.042s) test/e2e/apps/statefulset.go:697 At [By Step] Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4883 (Step Runtime: 9m22.402s) test/e2e/apps/statefulset.go:717 Spec Goroutine goroutine 705 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000eaa180, 0xc001936b00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc001dead00, 0xc001936b00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
    vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00175db80?}, 0xc001936b00?)
    vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc00175db80, 0xc001936b00)
    /usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0020a7920?)
    /usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc000fab7a0, 0xc001936a00)
    vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc001de8ee0, 0xc001936900)
    vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc001936900, {0x7fad100, 0xc001de8ee0}, {0x74d54e0?, 0x1?, 0x0?})
    /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc000fab7d0, 0xc001936900, {0x7f74922cf108?, 0x100?, 0x0?})
    /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc000fab7d0, 0xc001936900)
    /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
    /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001936700, {0x7fe0bc8, 0xc0000820e0}, 0x0?)
    vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001936700, {0x7fe0bc8, 0xc0000820e0})
    vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc0008f7ee0, {0x7fe0bc8, 0xc0000820e0}, {{{0x0, 0x0}, {0x0, 0x0}}, {0xc000f40bd0, 0x10}, {0x0, ...}, ...})
    vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:99
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003f24b60}, 0xc001745900)
    test/e2e/framework/statefulset/rest.go:68
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
    test/e2e/framework/statefulset/wait.go:37
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0001d0380?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Progress Report for Ginkgo Process #7 (Spec Runtime: 11m40.435s, Node Runtime: 11m40.045s, Step Runtime: 9m42.405s)
Spec Goroutine: goroutine 705 [select], stack identical to the report above (blocked in http2.(*ClientConn).RoundTrip)
------------------------------
Progress Report for Ginkgo Process #7 (Spec Runtime: 12m0.442s, Node Runtime: 12m0.051s, Step Runtime: 10m2.411s)
Spec Goroutine: goroutine 705 [select], stack identical to the report above
------------------------------
Nov 26 20:55:15.287: INFO: Unexpected error:
<*url.Error | 0xc001dd4300>: {
    Op: "Get",
    URL: "https://35.233.174.213/api/v1/namespaces/statefulset-4883/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
    Err: <http2.StreamError>{
        StreamID: 29,
        Code: 2,
        Cause: <*errors.errorString | 0xc00017d570>{
            s: "received from peer",
        },
    },
}
Nov 26 20:55:15.287: FAIL: Get "https://35.233.174.213/api/v1/namespaces/statefulset-4883/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": stream error: stream ID 29; INTERNAL_ERROR; received from peer
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003f24b60}, 0xc001745900)
    test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
    test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0001d0380?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719 +0x3d0
E1126 20:55:15.288102 8131 runtime.go:79] Observed a panic: types.GinkgoError (CodeLocation: test/e2e/framework/statefulset/rest.go:69; FullStackTrace identical to the Full Stack Trace above):
Your Test Panicked
test/e2e/framework/statefulset/rest.go:69
When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call
    defer GinkgoRecover()
at the top of the goroutine that caused this panic.
Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
goroutine 705 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000626850})
    vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000626850?})
    vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x70eb7e0, 0xc000626850})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc000e460c0, 0xbe}, {0xc0001c55a8?, 0x75b521a?, 0xc0001c55c8?})
    vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225
k8s.io/kubernetes/test/e2e/framework.Fail({0xc0001468f0, 0xa9}, {0xc0001c5640?, 0xc0001468f0?, 0xc0001c5668?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc001dd4300}, {0x0?, 0xc000f40bd0?, 0x10?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003f24b60}, 0xc001745900)
    test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
    test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0001d0380?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc001f74570, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xf8?, 0x2fd9d05?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x1?, 0xc002e7fe48?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x801de88?, 0xc003f24b60?, 0xc002e7fe88?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003f24b60}, 0x3, 0x3, 0xc001745900)
    test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
    test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11()
    test/e2e/apps/statefulset.go:719 +0x3d0
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc001ea4c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98
created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:124
Nov 26 20:56:15.331: INFO: Deleting all statefulset in ns statefulset-4883
Nov 26 20:57:15.373: INFO: Unexpected error:
<*url.Error | 0xc001dd4300>: {
    Op: "Get",
    URL: "https://35.233.174.213/apis/apps/v1/namespaces/statefulset-4883/statefulsets",
    Err: <http2.StreamError>{
        StreamID: 33,
        Code: 2,
        Cause: <*errors.errorString | 0xc00017d570>{
            s: "received from peer",
        },
    },
}
Nov 26 20:57:15.373: FAIL: Get "https://35.233.174.213/apis/apps/v1/namespaces/statefulset-4883/statefulsets": stream error: stream ID 33; INTERNAL_ERROR; received from peer
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc003f24b60}, {0xc0029c0060, 0x10})
    test/e2e/framework/statefulset/rest.go:76 +0x113
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()
    test/e2e/apps/statefulset.go:129 +0x1b2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 26 20:57:15.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 20:57:15.456
STEP: Collecting events from namespace "statefulset-4883".
11/26/22 20:57:15.456
STEP: Found 41 events. 11/26/22 20:57:15.502
Nov 26 20:57:15.502: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-0: { } Scheduled: Successfully assigned statefulset-4883/ss-0 to bootstrap-e2e-minion-group-b1s2
Nov 26 20:57:15.502: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-1: { } Scheduled: Successfully assigned statefulset-4883/ss-1 to bootstrap-e2e-minion-group-6k9m
Nov 26 20:57:15.502: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-2: { } Scheduled: Successfully assigned statefulset-4883/ss-2 to bootstrap-e2e-minion-group-01xg
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:11 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:13 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:18 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 5.28512497s (5.285132882s including waiting)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:18 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Created: Created container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:18 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Started: Started container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:21 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Killing: Stopping container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:22 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:22 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:22 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Unhealthy: Readiness probe failed: Get "http://10.64.1.40:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:24 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Unhealthy: Readiness probe failed: Get "http://10.64.1.50:80/index.html": dial tcp 10.64.1.50:80: i/o timeout (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:25 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} BackOff: Back-off restarting failed container webserver in pod ss-0_statefulset-4883(4257829c-9f06-4fcb-8fb5-bea6971cbb4c)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:40 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Unhealthy: Readiness probe failed: Get "http://10.64.1.51:80/index.html": read tcp 10.64.1.1:57612->10.64.1.51:80: read: connection reset by peer
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:43:41 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-b1s2} Unhealthy: Readiness probe failed: Get "http://10.64.1.51:80/index.html": dial tcp 10.64.1.51:80: connect: connection refused
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:34 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-2 in StatefulSet ss successful
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:34 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-1 in StatefulSet ss successful
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:35 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-dvkbf" : failed to sync configmap cache: timed out waiting for the condition
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:35 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:36 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:38 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Started: Started container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:38 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Created: Created container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:38 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 2.868815508s (2.868823886s including waiting)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:39 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Killing: Stopping container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:40 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.26466595s (3.264679016s including waiting)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:40 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Created: Created container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:40 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Started: Started container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:40 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Unhealthy: Readiness probe failed: Get "http://10.64.0.50:80/index.html": dial tcp 10.64.0.50:80: connect: connection refused
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:41 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:42 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Unhealthy: Readiness probe failed: Get "http://10.64.0.50:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:44:42 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:46:09 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Killing: Stopping container webserver
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:46:10 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:46:10 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:46:12 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Unhealthy: Readiness probe failed: Get "http://10.64.2.90:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:46:12 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} BackOff: Back-off restarting failed container webserver in pod ss-1_statefulset-4883(5b3c453e-896c-4a3b-9b54-0bb8217b476f)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:47:37 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6k9m} Unhealthy: Readiness probe failed: Get "http://10.64.2.91:80/index.html": dial tcp 10.64.2.91:80: i/o timeout (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:48:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} BackOff: Back-off restarting failed container webserver in pod ss-2_statefulset-4883(cc6187a2-2ef3-40f8-925e-48e41b6015bd)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:48:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Unhealthy: Readiness probe failed: Get "http://10.64.0.53:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.502: INFO: At 2022-11-26 20:49:04 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-01xg} Unhealthy: Readiness probe failed: Get "http://10.64.0.90:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 20:57:15.545: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 26 20:57:15.545: INFO: ss-0 bootstrap-e2e-minion-group-b1s2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:56:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:56:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:11 +0000 UTC }]
Nov 26 20:57:15.545: INFO: ss-1 bootstrap-e2e-minion-group-6k9m Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:55:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:55:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:34 +0000 UTC }]
Nov 26 20:57:15.545: INFO: ss-2 bootstrap-e2e-minion-group-01xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:54:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:54:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:44:34 +0000 UTC }]
Nov 26 20:57:15.545: INFO:
Nov 26 20:57:15.592: INFO: Unable to fetch statefulset-4883/ss-0/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods ss-0)
Nov 26 20:57:15.640: INFO: Unable to fetch statefulset-4883/ss-1/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods ss-1)
Nov 26 20:57:15.684: INFO: Unable to fetch statefulset-4883/ss-2/webserver logs: an error on the server ("unknown") has prevented the request from succeeding (get pods ss-2)
Nov 26 20:57:15.732: INFO: Logging node info for node bootstrap-e2e-master
Nov 26 20:57:15.773: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master
04b1a5e9-52d6-4a70-89ff-f2505e084f23 6875 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 
20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 20:57:15.776: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 20:57:15.821: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 20:57:15.866: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Nov 26 20:57:15.866: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg
Nov 26 20:57:15.909: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 6928 0 2022-11-26 20:40:21 +0000 UTC
<nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4017":"bootstrap-e2e-minion-group-01xg","csi-hostpath-multivolume-6507":"bootstrap-e2e-minion-group-01xg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:49:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 20:56:37 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 20:57:15.910: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg
Nov 26 20:57:15.957: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg
Nov 26 20:57:16.006: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-01xg: error trying to reach service: No agent available
Nov 26 20:57:16.006: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m
Nov 26 20:57:16.050: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 6917 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1205":"bootstrap-e2e-minion-group-6k9m"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 20:57:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 20:57:16.051: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 20:57:16.101: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 20:57:16.148: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6k9m: error trying to reach service: No agent available Nov 26 20:57:16.148: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:57:16.193: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 6927 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:53:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 20:57:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:52:31 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:52:31 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:52:31 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:52:31 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},},Config:nil,},} Nov 26 20:57:16.194: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:57:16.239: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 20:57:16.286: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-b1s2: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-4883" for this suite. 11/26/22 20:57:16.286
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003c7a000}, 0xc00102cf00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xc003156a20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc00016db60, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x65cbc00?, 0xc0036fbde0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003c7a000}, 0x1, 0x1, 0xc00102cf00) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 +0x57b
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 21:02:30.767 Nov 26 21:02:30.767: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 21:02:30.77 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:02:30.929 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:02:31.018 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-3227 11/26/22 21:02:31.109 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/26/22 21:02:31.169 STEP: Creating stateful set ss in namespace statefulset-3227 11/26/22 21:02:31.221 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3227 11/26/22 21:02:31.281 Nov 26 21:02:31.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:02:41.433: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:02:51.413: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:03:01.447: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:03:11.419: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:03:21.417: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:03:31.412: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 
21:03:41.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:03:51.417: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:04:01.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:04:11.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:04:21.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:04:31.437: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 21:05:41.400: INFO: Unexpected error: <*url.Error | 0xc003dbe180>: { Op: "Get", URL: "https://35.233.174.213/api/v1/namespaces/statefulset-3227/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <http2.StreamError>{ StreamID: 53, Code: 2, Cause: <*errors.errorString | 0xc0001c94b0>{ s: "received from peer", }, }, } Nov 26 21:05:41.400: FAIL: Get "https://35.233.174.213/api/v1/namespaces/statefulset-3227/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": stream error: stream ID 53; INTERNAL_ERROR; received from peer Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003c7a000}, 0xc00102cf00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xc003156a20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc00016db60, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x65cbc00?, 0xc0036fbde0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003c7a000}, 0x1, 0x1, 0xc00102cf00) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 +0x57b E1126 21:05:41.400556 8103 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003c7a000}, 0xc00102cf00)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xc003156a20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc00016db60, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x90?, 0x2fd9d05?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x65cbc00?, 0xc0036fbde0?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003c7a000}, 0x1, 0x1, 0xc00102cf00)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()\n\ttest/e2e/apps/statefulset.go:632 +0x57b", 
CustomMessage:""}} (�[1m�[38;5;9mYour Test Panicked�[0m �[38;5;243mtest/e2e/framework/statefulset/rest.go:69�[0m When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...). �[1mLearn more at:�[0m �[38;5;14m�[4mhttp://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure�[0m ) goroutine 2038 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000a52a80}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000a52a80?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000a52a80}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc001f2c0c0, 0xbe}, {0xc0036fb540?, 0x75b521a?, 0xc0036fb560?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc003cb60b0, 0xa9}, {0xc0036fb5d8?, 0xc003cb60b0?, 0xc0036fb600?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc003dbe180}, {0x0?, 0xc003156020?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x801de88, 0xc003c7a000}, 0xc00102cf00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xc003156a20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc00016db60, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x90?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x65cbc00?, 0xc0036fbde0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x801de88?, 0xc003c7a000}, 0x1, 0x1, 0xc00102cf00) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 +0x57b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000b66900}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 26 21:06:41.441: INFO: Deleting all statefulset in ns statefulset-3227 Nov 26 21:07:25.745: INFO: Scaling statefulset ss to 0 Nov 26 21:10:09.347: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 21:10:09.464: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 26 21:10:09.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 21:10:10.053 STEP: Collecting events from namespace "statefulset-3227". 11/26/22 21:10:10.053 STEP: Found 2 events. 
11/26/22 21:10:10.117 Nov 26 21:10:10.117: INFO: At 2022-11-26 21:02:31 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Nov 26 21:10:10.117: INFO: At 2022-11-26 21:10:03 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Nov 26 21:10:10.211: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 21:10:10.211: INFO: Nov 26 21:10:10.301: INFO: Logging node info for node bootstrap-e2e-master Nov 26 21:10:10.402: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 8790 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:10:10.403: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 21:10:10.479: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 21:10:40.536: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Nov 26 21:10:40.536: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg Nov 26 21:10:40.596: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 8909 0 2022-11-26 20:40:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:49:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 21:07:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 21:09:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:09:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:10:40.597: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg Nov 26 21:10:40.680: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg Nov 26 21:11:10.740: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-01xg: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Nov 26 21:11:10.740: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m Nov 26 21:11:10.809: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 
8912 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 21:07:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 21:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:09:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:09:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:11:10.810: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 21:11:10.870: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 21:11:11.956: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6k9m: error trying to reach service: No agent available Nov 26 21:11:11.956: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:11:12.022: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 8814 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:53:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 21:07:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 21:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},},Config:nil,},} Nov 26 21:11:12.022: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:11:12.091: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 21:11:12.185: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-b1s2: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-3227" for this suite. 11/26/22 21:11:12.185
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/auth/service_accounts.go:520 k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:520 +0x9ab
from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 20:42:20.129 Nov 26 20:42:20.129: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/26/22 20:42:20.131 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 20:42:20.646 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 20:42:20.784 [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/metrics/init/init.go:31 [It] should support InClusterConfig with token rotation [Slow] test/e2e/auth/service_accounts.go:432 Nov 26 20:42:20.955: INFO: created pod Nov 26 20:42:20.955: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [inclusterclient] Nov 26 20:42:20.955: INFO: Waiting up to 1m0s for pod "inclusterclient" in namespace "svcaccounts-2076" to be "running and ready" Nov 26 20:42:21.011: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 56.154471ms Nov 26 20:42:21.011: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-6k9m' to be 'Running' but was 'Pending' Nov 26 20:42:23.056: INFO: Pod "inclusterclient": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101061705s Nov 26 20:42:23.056: INFO: Error evaluating pod condition running and ready: want pod 'inclusterclient' on 'bootstrap-e2e-minion-group-6k9m' to be 'Running' but was 'Pending' Nov 26 20:42:26.762: INFO: Pod "inclusterclient": Phase="Running", Reason="", readiness=true. Elapsed: 5.807099647s Nov 26 20:42:26.762: INFO: Pod "inclusterclient" satisfied condition "running and ready" Nov 26 20:42:26.762: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [inclusterclient] Nov 26 20:42:26.762: INFO: pod is ready Nov 26 20:43:26.763: INFO: polling logs Nov 26 20:43:26.814: INFO: Retrying. 
Still waiting to see more unique tokens: got=1, want=2 Nov 26 20:44:26.763: INFO: polling logs Nov 26 20:44:26.843: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 26 20:45:26.763: INFO: polling logs Nov 26 20:45:26.829: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 Nov 26 20:46:26.763: INFO: polling logs Nov 26 20:46:44.243: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m0.748s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m0.001s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:47:26.763: INFO: polling logs Nov 26 20:47:26.811: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m20.75s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m20.002s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 5m40.752s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 5m40.005s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m0.755s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 6m0.007s)
      test/e2e/auth/service_accounts.go:432
Spec Goroutine
goroutine 554 [select, 2 minutes]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:48:26.763: INFO: polling logs
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m20.757s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 6m20.009s)
      test/e2e/auth/service_accounts.go:432
Spec Goroutine
goroutine 554 [select]
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169a500)
    vendor/golang.org/x/net/http2/transport.go:1200
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169a500, {0xe0?})
    vendor/golang.org/x/net/http2/transport.go:519
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
    vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169a500?)
    vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc003720000, 0xc00169a500)
    /usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?)
    /usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00032ba00)
    vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00032b400)
    vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc00032b400, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?})
    /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc0020c5ec0, 0xc00032b400, {0x7f26dfd7f5b8?, 0x100?, 0x0?})
    /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc0020c5ec0, 0xc00032b400)
    /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
    /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00032a400, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?)
    vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00032a400, {0x7fe0bc8, 0xc0000820e0})
    vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...)
    test/e2e/framework/pod/resource.go:572
k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...)
    test/e2e/framework/pod/resource.go:543
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1()
    test/e2e/auth/service_accounts.go:505
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 6m40.759s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 6m40.011s)
      test/e2e/auth/service_accounts.go:432
------------------------------
Nov 26 20:49:01.529: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m0.761s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 7m0.013s)
      test/e2e/auth/service_accounts.go:432
------------------------------
Nov 26 20:49:26.763: INFO: polling logs
Nov 26 20:49:26.814: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m20.762s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 7m20.014s)
      test/e2e/auth/service_accounts.go:432
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 7m40.764s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 7m40.017s)
      test/e2e/auth/service_accounts.go:432
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m0.766s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 8m0.019s)
      test/e2e/auth/service_accounts.go:432
------------------------------
Nov 26 20:50:26.763: INFO: polling logs
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m20.768s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 8m20.021s)
      test/e2e/auth/service_accounts.go:432
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 8m40.77s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 8m40.022s)
      test/e2e/auth/service_accounts.go:432
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m0.772s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 9m0.024s)
      test/e2e/auth/service_accounts.go:432
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m20.777s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 9m20.029s)
      test/e2e/auth/service_accounts.go:432
------------------------------
Nov 26 20:51:53.035: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 9m40.779s)
    test/e2e/auth/service_accounts.go:432
    In [It] (Node Runtime: 9m40.031s)
      test/e2e/auth/service_accounts.go:432
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m0.781s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m0.034s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:52:26.763: INFO: polling logs Nov 26 20:52:26.809: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m20.783s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m20.036s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 10m40.786s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 10m40.038s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m0.787s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m0.04s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:53:26.763: INFO: polling logs Nov 26 20:53:26.810: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m20.79s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m20.042s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 11m40.791s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 11m40.043s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m0.794s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m0.046s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:54:26.763: INFO: polling logs ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m20.796s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m20.049s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 12m40.799s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 12m40.051s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) 
vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) 
test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m0.802s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m0.055s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m20.815s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m20.068s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) 
vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) 
test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 13m40.835s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 13m40.088s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m0.844s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m0.097s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) 
vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) 
test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m20.859s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m20.111s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 14m40.864s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 14m40.117s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169bf00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169bf00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) 
vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169bf00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169bf00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001b96de0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b900) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169b300, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169b300, {0x7f26dfd7f5b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169b300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440aa80?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00169a500, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) 
test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:57:11.473: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m0.868s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m0.12s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:57:26.763: INFO: polling logs Nov 26 20:57:26.862: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m20.874s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m20.127s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 15m40.88s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 15m40.132s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m0.891s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m0.144s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:58:26.763: INFO: polling logs Nov 26 20:58:36.395: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient) ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m20.907s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m20.16s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 16m40.917s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 16m40.17s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m0.93s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m0.182s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:59:26.763: INFO: polling logs Nov 26 20:59:26.812: INFO: Retrying. Still waiting to see more unique tokens: got=1, want=2 ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m20.933s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m20.186s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 17m40.938s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 17m40.19s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m0.941s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m0.194s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6() test/e2e/auth/service_accounts.go:503 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 21:00:26.763: INFO: polling logs ------------------------------ Progress Report for Ginkgo Process #17 Automatically polling progress: [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m20.945s) test/e2e/auth/service_accounts.go:432 In [It] (Node Runtime: 18m20.198s) test/e2e/auth/service_accounts.go:432 Spec Goroutine goroutine 554 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a76480, 0xc00169b900) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002a4b080, 0xc00169b900, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003720000?}, 0xc00169b900?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003720000, 0xc00169b900) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0003667b0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0020c5e90, 0xc00169b300) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0015f6740, 0xc00169ac00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00169ac00, {0x7fad100, 0xc0015f6740}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0020c5ec0, 0xc00169ac00, {0x7f26dfd7f108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0020c5ec0, 0xc00169ac00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc000767200, {0x7fe0bc8, 0xc0000820e0}, 0x7f26b440bea8?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc000767200, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/test/e2e/framework/pod.getPodLogsInternal({0x801de88?, 0xc003981a00?}, {0xc004a30010, 0x10}, {0x75e13f6, 0xf}, {0x75e13f6, 0xf}, 0x0, 0x0, ...) test/e2e/framework/pod/resource.go:572 k8s.io/kubernetes/test/e2e/framework/pod.GetPodLogs(...) test/e2e/framework/pod/resource.go:543 > k8s.io/kubernetes/test/e2e/auth.glob..func5.6.1() test/e2e/auth/service_accounts.go:505 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?) 
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:00:56.870: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 18m40.95s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 18m40.202s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 554 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m0.953s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 19m0.205s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 554 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:01:26.763: INFO: polling logs
Nov 26 21:01:26.928: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m20.956s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 19m20.209s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 554 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 19m40.963s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 19m40.215s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 554 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] (Spec Runtime: 20m0.965s)
    test/e2e/auth/service_accounts.go:432
  In [It] (Node Runtime: 20m0.217s)
    test/e2e/auth/service_accounts.go:432
  Spec Goroutine
  goroutine 554 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc0050df5f0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xb8?, 0x2fd9d05?, 0x18?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc004a3de08?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x75b6f82?, 0x4?, 0x75d300d?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445
> k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:503
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003730d80, 0xc0038b0c00})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:02:26.763: INFO: polling logs
Nov 26 21:02:26.928: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 26 21:02:26.928: INFO: polling logs
Nov 26 21:02:27.104: INFO: Error pulling logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 26 21:02:27.104: FAIL: Unexpected error:
    timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func5.6()
    test/e2e/auth/service_accounts.go:520 +0x9ab
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/node/init/init.go:32
Nov 26 21:02:27.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-auth] ServiceAccounts
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-auth] ServiceAccounts
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 21:02:27.208
STEP: Collecting events from namespace "svcaccounts-2076". 11/26/22 21:02:27.208
STEP: Found 5 events.
11/26/22 21:02:27.311
Nov 26 21:02:27.312: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for inclusterclient: { } Scheduled: Successfully assigned svcaccounts-2076/inclusterclient to bootstrap-e2e-minion-group-6k9m
Nov 26 21:02:27.312: INFO: At 2022-11-26 20:42:22 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 21:02:27.312: INFO: At 2022-11-26 20:42:22 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-6k9m} Created: Created container inclusterclient
Nov 26 21:02:27.312: INFO: At 2022-11-26 20:42:22 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-6k9m} Started: Started container inclusterclient
Nov 26 21:02:27.312: INFO: At 2022-11-26 20:43:26 +0000 UTC - event for inclusterclient: {kubelet bootstrap-e2e-minion-group-6k9m} Killing: Stopping container inclusterclient
Nov 26 21:02:27.417: INFO: POD              NODE                             PHASE   GRACE  CONDITIONS
Nov 26 21:02:27.417: INFO: inclusterclient  bootstrap-e2e-minion-group-6k9m  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:42:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:27 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:27 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:42:20 +0000 UTC }]
Nov 26 21:02:27.417: INFO:
Nov 26 21:02:27.507: INFO: Unable to fetch svcaccounts-2076/inclusterclient/inclusterclient logs: an error on the server ("unknown") has prevented the request from succeeding (get pods inclusterclient)
Nov 26 21:02:27.594: INFO: Logging node info for node bootstrap-e2e-master
Nov 26 21:02:27.656: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 8374 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:02:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:02:17 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 
21:02:17 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:02:17 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:02:17 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 21:02:27.656: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 21:02:27.705: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 21:02:27.783: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Nov 26 21:02:27.783: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg
Nov 26 21:02:27.838: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 8207 0 2022-11-26 20:40:21 +0000 UTC
<nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:49:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:58:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 21:01:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:01:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 21:02:27.839: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg
Nov 26 21:02:27.915: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg
Nov 26 21:02:28.011: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-01xg: error trying to reach service: No agent available
Nov 26 21:02:28.011: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m
Nov 26 21:02:28.066: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 8291 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:58:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 21:02:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:02:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:58:59 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:02:28.067: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 21:02:28.184: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 21:02:28.275: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6k9m: error trying to reach service: No agent available Nov 26 21:02:28.275: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:02:28.366: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 8288 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:53:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 20:57:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-26 21:02:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:02:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},},Config:nil,},} Nov 26 21:02:28.366: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:02:28.437: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 21:02:28.540: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-b1s2: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193 STEP: Destroying namespace "svcaccounts-2076" for this suite. 11/26/22 21:02:28.54
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/kubectl/kubectl.go:415 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 There were additional failures detected after the initial failure: [FAILED] Nov 26 21:06:13.967: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4639 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (Timeout): error when deleting "STDIN": Timeout: request did not complete within requested timeout - context deadline exceeded error: exit status 1 In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 21:01:21.979 Nov 26 21:01:21.979: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 21:01:21.983 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:01:22.237 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:01:22.334 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 21:01:22.463 Nov 26 21:01:22.464: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4639 create -f -' Nov 26 21:01:23.647: INFO: stderr: "" Nov 26 21:01:23.647: INFO: stdout: "pod/httpd created\n" Nov 26 21:01:23.647: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 21:01:23.647: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4639" to be "running and ready" Nov 26 21:01:23.722: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 74.389501ms Nov 26 21:01:23.722: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 26 21:01:25.779: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131630864s Nov 26 21:01:25.779: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending' Nov 26 21:01:27.789: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.141393279s
Nov 26 21:01:27.789: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:29.808: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160362829s
Nov 26 21:01:29.808: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:31.804: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156837479s
Nov 26 21:01:31.804: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:33.789: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142013838s
Nov 26 21:01:33.789: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:35.868: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221213421s
Nov 26 21:01:35.868: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:37.786: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.138735164s
Nov 26 21:01:37.786: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:39.788: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.140287546s
Nov 26 21:01:39.788: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:41.787: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.139645541s
Nov 26 21:01:41.787: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:43.777: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.129850987s
Nov 26 21:01:43.777: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:45.802: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.154616428s
Nov 26 21:01:45.802: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:47.802: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.155011479s
Nov 26 21:01:47.802: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:49.802: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.155036893s
Nov 26 21:01:49.802: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:51.792: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.144743486s
Nov 26 21:01:51.792: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:53.784: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.13704975s
Nov 26 21:01:53.784: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:55.781: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.134012924s
Nov 26 21:01:55.781: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:57.797: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.150118196s
Nov 26 21:01:57.797: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:01:59.779: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.131288951s
Nov 26 21:01:59.779: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:01.786: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.139022254s
Nov 26 21:02:01.786: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:03.868: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.221074172s
Nov 26 21:02:03.868: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:05.796: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 42.149099375s
Nov 26 21:02:05.796: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:07.779: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 44.131565837s
Nov 26 21:02:07.779: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:09.821: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.173499078s
Nov 26 21:02:09.821: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:11.878: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.230595366s
Nov 26 21:02:11.878: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:13.776: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.129094707s
Nov 26 21:02:13.776: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:15.810: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 52.162962422s
Nov 26 21:02:15.810: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:17.797: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.149486798s
Nov 26 21:02:17.797: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:19.845: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 56.197921013s
Nov 26 21:02:19.845: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:21.805: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.158264363s
Nov 26 21:02:21.806: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:23.770: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.12286027s
Nov 26 21:02:23.770: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:25.772: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.125034231s
Nov 26 21:02:25.772: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:27.778: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.130761663s
Nov 26 21:02:27.778: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:29.817: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.169982893s
Nov 26 21:02:29.817: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:31.781: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.13395798s
Nov 26 21:02:31.781: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:33.819: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.171342911s
Nov 26 21:02:33.819: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:35.797: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.149906538s
Nov 26 21:02:35.797: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:37.804: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.157018638s
Nov 26 21:02:37.804: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:39.783: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.135571033s
Nov 26 21:02:39.783: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:41.802: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.154945865s
Nov 26 21:02:41.802: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:43.770: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.122469493s
Nov 26 21:02:43.770: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:45.823: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.175296298s
Nov 26 21:02:45.823: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:47.773: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.125455906s
Nov 26 21:02:47.773: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:49.804: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.157024033s
Nov 26 21:02:49.804: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:51.797: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.149674417s
Nov 26 21:02:51.797: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:53.777: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.129479665s
Nov 26 21:02:53.777: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:55.806: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.158945582s
Nov 26 21:02:55.806: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:57.774: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.127020242s
Nov 26 21:02:57.774: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:02:59.814: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.167064849s
Nov 26 21:02:59.814: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:01.787: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.13997712s
Nov 26 21:03:01.787: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:03.774: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.126359391s
Nov 26 21:03:03.774: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:05.794: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.147163691s
Nov 26 21:03:05.794: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:07.777: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.129722271s
Nov 26 21:03:07.777: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:09.785: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.138257351s
Nov 26 21:03:09.785: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:11.801: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.154045499s
Nov 26 21:03:11.801: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:13.773: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.125898753s
Nov 26 21:03:13.773: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:15.858: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.211012173s
Nov 26 21:03:15.858: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:17.786: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.139115433s
Nov 26 21:03:17.786: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:19.788: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.140889715s
Nov 26 21:03:19.788: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:21.790: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.142843637s
Nov 26 21:03:21.790: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:23.783: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.135371012s
Nov 26 21:03:23.783: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:25.785: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.137401236s
Nov 26 21:03:25.785: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:27.784: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.136769104s
Nov 26 21:03:27.784: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:29.813: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.165388916s
Nov 26 21:03:29.813: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:31.785: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.137359647s
Nov 26 21:03:31.785: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:33.777: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.129336048s
Nov 26 21:03:33.777: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:35.782: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.135212914s
Nov 26 21:03:35.782: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:37.797: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.149478632s
Nov 26 21:03:37.797: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:39.877: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.230170566s
Nov 26 21:03:39.877: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:41.795: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.147463921s
Nov 26 21:03:41.795: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:43.808: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.16078819s
Nov 26 21:03:43.808: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:45.804: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.157077796s
Nov 26 21:03:45.804: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:47.794: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.146821259s
Nov 26 21:03:47.794: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:49.782: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.134835907s
Nov 26 21:03:49.782: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:51.825: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.177385279s
Nov 26 21:03:51.825: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:53.787: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.139559408s
Nov 26 21:03:53.787: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:55.803: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.156254334s
Nov 26 21:03:55.803: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:57.804: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.157163533s
Nov 26 21:03:57.804: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:03:59.796: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.148968573s
Nov 26 21:03:59.796: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:01.801: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.153575284s
Nov 26 21:04:01.801: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:03.789: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.141637981s
Nov 26 21:04:03.789: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:05.817: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.169729772s
Nov 26 21:04:05.817: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:07.820: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.173212438s
Nov 26 21:04:07.820: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:09.799: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.151957184s
Nov 26 21:04:09.799: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:11.809: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.161466989s
Nov 26 21:04:11.809: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:13.815: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.167720784s
Nov 26 21:04:13.815: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:15.781: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.133660812s
Nov 26 21:04:15.781: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:17.786: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.138746706s
Nov 26 21:04:17.786: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:19.799: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.152107833s
Nov 26 21:04:19.799: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:21.778: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.131162722s
Nov 26 21:04:21.778: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:23.789: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.142052912s
Nov 26 21:04:23.789: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:25.783: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.135609932s
Nov 26 21:04:25.783: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:27.811: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.163543415s
Nov 26 21:04:27.811: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:29.787: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.139846792s
Nov 26 21:04:29.787: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:31.782: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.134682031s
Nov 26 21:04:31.782: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:33.774: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.127233021s
Nov 26 21:04:33.774: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:35.790: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.143201871s
Nov 26 21:04:35.790: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:04:37.771: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.123416575s
Nov 26 21:04:37.771: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on '' to be 'Running' but was 'Pending'
Nov 26 21:05:39.765: INFO: Encountered non-retryable error while getting pod kubectl-4639/httpd: Get "https://35.233.174.213/api/v1/namespaces/kubectl-4639/pods/httpd": stream error: stream ID 1115; INTERNAL_ERROR; received from peer
Nov 26 21:05:39.765: INFO: Pod httpd failed to be running and ready.
Nov 26 21:05:39.765: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [httpd]
Nov 26 21:05:39.765: FAIL: Expected
    <bool>: false
to equal
    <bool>: true

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1()
	test/e2e/kubectl/kubectl.go:415 +0x245
[AfterEach] Simple pod
  test/e2e/kubectl/kubectl.go:417
STEP: using delete to clean up resources 11/26/22 21:05:39.765
Nov 26 21:05:39.765: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4639 delete --grace-period=0 --force -f -'
Nov 26 21:06:13.966: INFO: rc: 1
Nov 26 21:06:13.966: INFO: Unexpected error:
    <exec.CodeExitError>: {
        Err: <*errors.errorString | 0xc004bd2010>{
            s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4639 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nError from server (Timeout): error when deleting \"STDIN\": Timeout: request did not complete within requested timeout - context deadline exceeded\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
Nov 26 21:06:13.967: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=kubectl-4639 delete --grace-period=0 --force -f -:
Command stdout:

stderr:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Timeout): error when deleting "STDIN": Timeout: request did not complete within requested timeout - context deadline exceeded

error:
exit status 1

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc000cea160?, 0x0?}, {0xc004ee4460, 0xc})
	test/e2e/framework/kubectl/builder.go:87 +0x1b4
k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc004ee4460, 0xc}, {0xc0016706e0, 0x145}, {0xc002abfec0?, 0x8?, 0x7f05467e75b8?})
	test/e2e/framework/kubectl/builder.go:165 +0xd6
k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc0016706e0, 0x145}, {0xc004ee4460, 0xc}, {0xc004bd3550, 0x1, 0x1})
	test/e2e/kubectl/kubectl.go:201 +0x132
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2()
	test/e2e/kubectl/kubectl.go:418 +0x76
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/node/init/init.go:32
Nov 26 21:06:13.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-cli] Kubectl client
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-cli] Kubectl client
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 21:07:14.009
STEP: Collecting events from namespace "kubectl-4639". 11/26/22 21:07:14.009
STEP: Found 0 events.
11/26/22 21:07:14.052 Nov 26 21:07:49.258: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 21:07:49.258: INFO: httpd Pending [] Nov 26 21:07:49.258: INFO: Nov 26 21:07:49.559: INFO: Logging node info for node bootstrap-e2e-master Nov 26 21:07:49.669: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 8790 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:07:42 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 21:07:49.669: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 21:07:50.125: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 21:07:50.523: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container kube-apiserver ready: true, restart count 0
Nov 26 21:07:50.523: INFO: metadata-proxy-v0.1-cbwjf started at 2022-11-26 20:40:24 +0000 UTC (0+2 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 21:07:50.523: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 21:07:50.523: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container kube-controller-manager ready: false, restart count 7
Nov 26 21:07:50.523: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container kube-scheduler ready: false, restart count 8
Nov 26 21:07:50.523: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container etcd-container ready: true, restart count 1
Nov 26 21:07:50.523: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container etcd-container ready: true, restart count 7
Nov 26 21:07:50.523: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container konnectivity-server-container ready: true, restart count 4
Nov 26 21:07:50.523: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container kube-addon-manager ready: true, restart count 2
Nov 26 21:07:50.523: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:07:50.523: INFO: Container l7-lb-controller ready: false, restart count 8
Nov 26 21:07:51.459: INFO: Latency metrics for node bootstrap-e2e-master
Nov 26 21:07:51.459: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg
Nov 26 21:07:51.578: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 8787 0 2022-11-26 20:40:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:49:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:04:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 21:07:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:05 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:05 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:05 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:04:05 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:07:51.578: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg Nov 26 21:07:51.808: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg Nov 26 21:07:51.994: INFO: pod-subpath-test-dynamicpv-q2l6 started at 2022-11-26 20:42:23 +0000 UTC (1+2 container statuses recorded) Nov 26 21:07:51.994: INFO: Init container init-volume-dynamicpv-q2l6 ready: true, restart count 1 Nov 26 21:07:51.994: INFO: Container test-container-subpath-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 21:07:51.994: INFO: Container test-container-volume-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 21:07:51.994: INFO: metadata-proxy-v0.1-h8gjd started at 2022-11-26 20:40:22 +0000 UTC (0+2 container statuses recorded) Nov 26 21:07:51.994: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:07:51.994: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:07:51.994: INFO: konnectivity-agent-bgjhj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:51.994: INFO: Container konnectivity-agent ready: true, restart count 9 Nov 26 21:07:51.994: INFO: kube-proxy-bootstrap-e2e-minion-group-01xg started at 2022-11-26 20:40:21 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:51.994: INFO: Container kube-proxy ready: true, restart count 7 Nov 26 21:07:51.994: INFO: coredns-6d97d5ddb-b4rcb started at 2022-11-26 20:40:44 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:51.994: INFO: Container coredns ready: false, restart count 9 Nov 26 21:07:52.532: INFO: Latency metrics for node bootstrap-e2e-minion-group-01xg Nov 26 21:07:52.532: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m Nov 26 21:07:52.582: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 8788 0 2022-11-26 
20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:04:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {node-problem-detector Update v1 2022-11-26 21:07:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:07 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:07 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:04:07 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:04:07 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:07:52.583: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 21:07:52.635: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 21:07:52.899: INFO: kube-proxy-bootstrap-e2e-minion-group-6k9m started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:52.899: INFO: Container kube-proxy ready: false, restart count 8 Nov 26 21:07:52.899: INFO: kube-dns-autoscaler-5f6455f985-mcwh8 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:52.899: INFO: Container autoscaler ready: true, restart count 8 Nov 26 21:07:52.899: INFO: l7-default-backend-8549d69d99-c89m7 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:52.899: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 21:07:52.899: INFO: volume-snapshot-controller-0 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:52.899: INFO: Container volume-snapshot-controller ready: false, restart count 7 Nov 26 21:07:52.899: INFO: coredns-6d97d5ddb-l2p8d started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:52.899: INFO: Container coredns ready: false, restart count 9 Nov 26 21:07:52.899: INFO: metadata-proxy-v0.1-ltr6z started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 21:07:52.899: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:07:52.899: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:07:52.899: INFO: konnectivity-agent-dvrb2 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses 
recorded) Nov 26 21:07:52.899: INFO: Container konnectivity-agent ready: false, restart count 8 Nov 26 21:07:53.398: INFO: Latency metrics for node bootstrap-e2e-minion-group-6k9m Nov 26 21:07:53.398: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:07:53.477: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 8814 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 
20:53:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 21:07:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 21:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:07:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:07:51 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},},Config:nil,},} Nov 26 21:07:53.478: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:07:53.536: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 21:07:53.760: INFO: metadata-proxy-v0.1-6l49k started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 21:07:53.760: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:07:53.760: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:07:53.760: INFO: konnectivity-agent-q4nqj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:53.760: INFO: Container konnectivity-agent ready: true, restart count 8 Nov 26 21:07:53.760: INFO: 
metrics-server-v0.5.2-867b8754b9-xh56x started at 2022-11-26 20:41:00 +0000 UTC (0+2 container statuses recorded) Nov 26 21:07:53.760: INFO: Container metrics-server ready: false, restart count 9 Nov 26 21:07:53.760: INFO: Container metrics-server-nanny ready: false, restart count 9 Nov 26 21:07:53.760: INFO: kube-proxy-bootstrap-e2e-minion-group-b1s2 started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 21:07:53.760: INFO: Container kube-proxy ready: true, restart count 8 Nov 26 21:07:54.104: INFO: Latency metrics for node bootstrap-e2e-minion-group-b1s2 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-4639" for this suite. 11/26/22 21:07:54.105
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/cloud/gcp/addon_update.go:353 k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc004fd29c0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc003ef5e30?}, 0x1d?) test/e2e/cloud/gcp/addon_update.go:353 +0x54 k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3() test/e2e/cloud/gcp/addon_update.go:311 +0x1025
from junit_01.xml
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 20:51:57.721 Nov 26 20:51:57.722: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/26/22 20:51:57.723 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 20:53:36.539 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 20:53:36.667 [BeforeEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:223 [It] should propagate add-on file changes [Slow] test/e2e/cloud/gcp/addon_update.go:244 Nov 26 20:53:37.169: INFO: Executing 'mkdir -p addon-test-dir/addon-update-test-998' on 35.233.174.213:22 Nov 26 20:53:37.340: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-reconcile-controller.yaml' on 35.233.174.213:22 Nov 26 20:53:37.458: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-reconcile-controller-Updated.yaml' on 35.233.174.213:22 Nov 26 20:53:37.576: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-deprecated-label-service.yaml' on 35.233.174.213:22 Nov 26 20:53:37.693: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-deprecated-label-service-updated.yaml' on 35.233.174.213:22 Nov 26 20:53:37.811: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-ensure-exists-service.yaml' on 35.233.174.213:22 Nov 26 20:53:37.929: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/addon-ensure-exists-service-updated.yaml' on 35.233.174.213:22 Nov 26 20:53:38.046: INFO: Writing remote file 'addon-test-dir/addon-update-test-998/invalid-addon-controller.yaml' on 35.233.174.213:22 Nov 26 20:53:38.164: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 35.233.174.213:22 Nov 26 20:53:38.265: 
INFO: Executing 'sudo mkdir -p /etc/kubernetes/addons/addon-test-dir/addon-update-test-998' on 35.233.174.213:22 STEP: copy invalid manifests to the destination dir 11/26/22 20:53:38.354 Nov 26 20:53:38.354: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-998/invalid-addon-controller.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-998/invalid-addon-controller.yaml' on 35.233.174.213:22 STEP: copy new manifests 11/26/22 20:53:38.448 Nov 26 20:53:38.448: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-998/addon-reconcile-controller.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-998/addon-reconcile-controller.yaml' on 35.233.174.213:22 Nov 26 20:53:38.537: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-998/addon-deprecated-label-service.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-998/addon-deprecated-label-service.yaml' on 35.233.174.213:22 Nov 26 20:53:38.630: INFO: Executing 'sudo cp addon-test-dir/addon-update-test-998/addon-ensure-exists-service.yaml /etc/kubernetes/addons/addon-test-dir/addon-update-test-998/addon-ensure-exists-service.yaml' on 35.233.174.213:22 Nov 26 20:53:38.764: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:41.820: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:44.895: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:47.832: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:50.821: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). 
Nov 26 20:53:53.824: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:56.901: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:53:59.902: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:54:02.923: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:54:05.827: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:55:08.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (Get "https://35.233.174.213/api/v1/namespaces/kube-system/replicationcontrollers/addon-reconcile-test": stream error: stream ID 819; INTERNAL_ERROR; received from peer). Nov 26 20:56:11.832: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (Get "https://35.233.174.213/api/v1/namespaces/kube-system/replicationcontrollers/addon-reconcile-test": stream error: stream ID 821; INTERNAL_ERROR; received from peer). Nov 26 20:57:11.690: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:11.809: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:14.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). 
Nov 26 20:57:17.812: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:20.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:23.807: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:26.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:29.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:32.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:35.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:38.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:41.812: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:44.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:47.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:50.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). 
Nov 26 20:57:53.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:56.807: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:57:59.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:02.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:05.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:08.821: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:11.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:14.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:17.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:20.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:23.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:26.808: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). 
Nov 26 20:58:29.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:32.806: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:35.805: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). ------------------------------ Progress Report for Ginkgo Process #12 Automatically polling progress: [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow] (Spec Runtime: 6m39.449s) test/e2e/cloud/gcp/addon_update.go:244 In [It] (Node Runtime: 5m0.001s) test/e2e/cloud/gcp/addon_update.go:244 At [By Step] copy new manifests (Step Runtime: 4m58.723s) test/e2e/cloud/gcp/addon_update.go:300 Spec Goroutine goroutine 1309 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc003e28348, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x30?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x2bc362c?, 0xc000a79680?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc005228c40?, 0x66e0100?, 0xacfb400?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationController({0x801de88?, 0xc004fd29c0}, {0x75ce977, 0xb}, {0x760025e, 0x14}, 0x1, 0xc00521b1d0?, 0x0?) 
test/e2e/cloud/gcp/addon_update.go:367 > k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc004fd29c0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc003ef5e30?}, 0x1d?) test/e2e/cloud/gcp/addon_update.go:353 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3() test/e2e/cloud/gcp/addon_update.go:311 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc002183380}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:58:38.807: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:38.848: INFO: Get ReplicationController addon-reconcile-test in namespace kube-system failed (replicationcontrollers "addon-reconcile-test" not found). Nov 26 20:58:38.848: INFO: Unexpected error: <*errors.errorString | 0xc005068440>: { s: "error waiting for ReplicationController kube-system/addon-reconcile-test to appear: timed out waiting for the condition", } Nov 26 20:58:38.848: FAIL: error waiting for ReplicationController kube-system/addon-reconcile-test to appear: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/cloud/gcp.waitForReplicationControllerInAddonTest({0x801de88?, 0xc004fd29c0?}, {0x75ce977?, 0x4?}, {0x760025e?, 0xc003ef5e30?}, 0x1d?) test/e2e/cloud/gcp/addon_update.go:353 +0x54 k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func1.3() test/e2e/cloud/gcp/addon_update.go:311 +0x1025 Nov 26 20:58:38.848: INFO: Cleaning up ensure exist class addon. 
Nov 26 20:58:38.898: INFO: Executing 'sudo rm -rf /etc/kubernetes/addons/addon-test-dir' on 35.233.174.213:22 Nov 26 20:58:38.991: INFO: Executing 'rm -rf addon-test-dir' on 35.233.174.213:22 [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/node/init/init.go:32 Nov 26 20:58:39.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:237 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 20:58:39.212 STEP: Collecting events from namespace "addon-update-test-998". 11/26/22 20:58:39.212 STEP: Found 0 events. 11/26/22 20:58:39.267 Nov 26 20:58:39.312: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 20:58:39.312: INFO: Nov 26 20:58:39.358: INFO: Logging node info for node bootstrap-e2e-master Nov 26 20:58:39.402: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 6875 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 
1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:57:02 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 20:58:39.404: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 20:58:39.470: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 20:58:39.559: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 26 20:58:39.559: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container l7-lb-controller ready: false, restart count 6 Nov 26 20:58:39.559: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container kube-controller-manager ready: false, restart count 6 Nov 26 20:58:39.559: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container kube-scheduler ready: true, restart count 6 Nov 26 20:58:39.559: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container etcd-container ready: true, 
restart count 1 Nov 26 20:58:39.559: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container etcd-container ready: true, restart count 6 Nov 26 20:58:39.559: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container konnectivity-server-container ready: true, restart count 3 Nov 26 20:58:39.559: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.559: INFO: Container kube-apiserver ready: true, restart count 0 Nov 26 20:58:39.559: INFO: metadata-proxy-v0.1-cbwjf started at 2022-11-26 20:40:24 +0000 UTC (0+2 container statuses recorded) Nov 26 20:58:39.559: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 20:58:39.559: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 20:58:39.789: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 20:58:39.789: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg Nov 26 20:58:39.832: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 6928 0 2022-11-26 20:40:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4017":"bootstrap-e2e-minion-group-01xg","csi-hostpath-multivolume-6507":"bootstrap-e2e-minion-group-01xg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:49:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 20:56:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:56:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 20:58:39.833: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg Nov 26 20:58:39.888: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg Nov 26 20:58:39.996: INFO: kube-proxy-bootstrap-e2e-minion-group-01xg started at 2022-11-26 20:40:21 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 20:58:39.996: INFO: coredns-6d97d5ddb-b4rcb started at 2022-11-26 20:40:44 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container coredns ready: false, restart count 7 Nov 26 20:58:39.996: INFO: csi-mockplugin-0 started at 2022-11-26 20:47:27 +0000 UTC (0+4 container statuses recorded) 
Nov 26 20:58:39.996: INFO: Container busybox ready: false, restart count 4 Nov 26 20:58:39.996: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 20:58:39.996: INFO: Container driver-registrar ready: false, restart count 4 Nov 26 20:58:39.996: INFO: Container mock ready: false, restart count 4 Nov 26 20:58:39.996: INFO: csi-mockplugin-0 started at 2022-11-26 20:53:39 +0000 UTC (0+4 container statuses recorded) Nov 26 20:58:39.996: INFO: Container busybox ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container driver-registrar ready: false, restart count 4 Nov 26 20:58:39.996: INFO: Container mock ready: false, restart count 4 Nov 26 20:58:39.996: INFO: execpod-dropz47gk started at 2022-11-26 20:42:08 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container agnhost-container ready: true, restart count 6 Nov 26 20:58:39.996: INFO: csi-mockplugin-0 started at 2022-11-26 20:53:39 +0000 UTC (0+4 container statuses recorded) Nov 26 20:58:39.996: INFO: Container busybox ready: false, restart count 3 Nov 26 20:58:39.996: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 20:58:39.996: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 20:58:39.996: INFO: Container mock ready: false, restart count 5 Nov 26 20:58:39.996: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:47:28 +0000 UTC (0+7 container statuses recorded) Nov 26 20:58:39.996: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container hostpath ready: true, restart count 4 Nov 26 20:58:39.996: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 
20:58:39.996: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 20:58:39.996: INFO: net-tiers-svc-crlwd started at 2022-11-26 20:41:53 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container netexec ready: true, restart count 5 Nov 26 20:58:39.996: INFO: pod-subpath-test-dynamicpv-q2l6 started at 2022-11-26 20:42:23 +0000 UTC (1+2 container statuses recorded) Nov 26 20:58:39.996: INFO: Init container init-volume-dynamicpv-q2l6 ready: true, restart count 1 Nov 26 20:58:39.996: INFO: Container test-container-subpath-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 20:58:39.996: INFO: Container test-container-volume-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 20:58:39.996: INFO: ss-2 started at 2022-11-26 20:44:34 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container webserver ready: true, restart count 5 Nov 26 20:58:39.996: INFO: metadata-proxy-v0.1-h8gjd started at 2022-11-26 20:40:22 +0000 UTC (0+2 container statuses recorded) Nov 26 20:58:39.996: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 20:58:39.996: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 20:58:39.996: INFO: konnectivity-agent-bgjhj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container konnectivity-agent ready: false, restart count 7 Nov 26 20:58:39.996: INFO: hostexec-bootstrap-e2e-minion-group-01xg-j6zk4 started at 2022-11-26 20:53:36 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:39.996: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 20:58:39.996: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:47:51 +0000 UTC (0+7 container statuses recorded) Nov 26 20:58:39.996: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 20:58:39.996: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 20:58:39.996: INFO: Container csi-resizer ready: true, 
restart count 5 Nov 26 20:58:39.996: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 20:58:39.996: INFO: Container hostpath ready: true, restart count 5 Nov 26 20:58:39.996: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 20:58:39.996: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 20:58:40.303: INFO: Latency metrics for node bootstrap-e2e-minion-group-01xg Nov 26 20:58:40.303: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m Nov 26 20:58:40.345: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 6917 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1205":"bootstrap-e2e-minion-group-6k9m"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 20:57:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:57:10 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:53:53 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 20:58:40.346: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 20:58:40.392: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 20:58:40.468: INFO: coredns-6d97d5ddb-l2p8d started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container coredns ready: false, restart count 7 Nov 26 20:58:40.468: INFO: ss-1 started at 2022-11-26 20:44:34 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container webserver ready: false, restart count 6 Nov 26 20:58:40.468: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:45:52 +0000 UTC (0+7 container statuses recorded) Nov 26 20:58:40.468: INFO: Container csi-attacher ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container csi-provisioner ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container csi-resizer ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container csi-snapshotter ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container hostpath ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container liveness-probe ready: true, restart count 6 Nov 26 20:58:40.468: INFO: Container node-driver-registrar ready: true, restart count 6 Nov 26 20:58:40.468: INFO: metadata-proxy-v0.1-ltr6z started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 20:58:40.468: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 20:58:40.468: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 20:58:40.468: INFO: inclusterclient started at 2022-11-26 20:42:20 +0000 UTC (0+1 container statuses recorded) Nov 26 
20:58:40.468: INFO: Container inclusterclient ready: false, restart count 0 Nov 26 20:58:40.468: INFO: lb-sourcerange-4vxht started at 2022-11-26 20:42:14 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container netexec ready: false, restart count 6 Nov 26 20:58:40.468: INFO: csi-mockplugin-0 started at 2022-11-26 20:53:39 +0000 UTC (0+4 container statuses recorded) Nov 26 20:58:40.468: INFO: Container busybox ready: true, restart count 2 Nov 26 20:58:40.468: INFO: Container csi-provisioner ready: false, restart count 2 Nov 26 20:58:40.468: INFO: Container driver-registrar ready: false, restart count 2 Nov 26 20:58:40.468: INFO: Container mock ready: false, restart count 2 Nov 26 20:58:40.468: INFO: konnectivity-agent-dvrb2 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container konnectivity-agent ready: true, restart count 7 Nov 26 20:58:40.468: INFO: l7-default-backend-8549d69d99-c89m7 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 20:58:40.468: INFO: volume-snapshot-controller-0 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container volume-snapshot-controller ready: true, restart count 4 Nov 26 20:58:40.468: INFO: kube-proxy-bootstrap-e2e-minion-group-6k9m started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container kube-proxy ready: true, restart count 7 Nov 26 20:58:40.468: INFO: kube-dns-autoscaler-5f6455f985-mcwh8 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:40.468: INFO: Container autoscaler ready: false, restart count 6 Nov 26 20:58:40.873: INFO: Latency metrics for node bootstrap-e2e-minion-group-6k9m Nov 26 20:58:40.873: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:58:40.916: 
INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 7007 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1565":"bootstrap-e2e-minion-group-b1s2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 20:53:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 20:57:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 20:57:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:57:09 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:57:37 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},},Config:nil,},} Nov 26 20:58:40.917: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:58:40.961: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 20:58:41.058: INFO: execpod-acceptht5sz started at 2022-11-26 20:41:52 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container agnhost-container ready: false, restart count 5 Nov 26 20:58:41.059: INFO: csi-mockplugin-0 started at 2022-11-26 20:47:27 +0000 UTC (0+3 container statuses recorded) Nov 26 20:58:41.059: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 20:58:41.059: INFO: Container driver-registrar ready: false, restart count 5 Nov 26 20:58:41.059: INFO: Container mock ready: 
false, restart count 5 Nov 26 20:58:41.059: INFO: metrics-server-v0.5.2-867b8754b9-xh56x started at 2022-11-26 20:41:00 +0000 UTC (0+2 container statuses recorded) Nov 26 20:58:41.059: INFO: Container metrics-server ready: false, restart count 6 Nov 26 20:58:41.059: INFO: Container metrics-server-nanny ready: false, restart count 7 Nov 26 20:58:41.059: INFO: hostexec-bootstrap-e2e-minion-group-b1s2-q6h9p started at 2022-11-26 20:54:03 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container agnhost-container ready: true, restart count 3 Nov 26 20:58:41.059: INFO: external-provisioner-rmn9q started at 2022-11-26 20:53:37 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container nfs-provisioner ready: false, restart count 2 Nov 26 20:58:41.059: INFO: test-hostpath-type-g6dmg started at 2022-11-26 20:53:56 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 20:58:41.059: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 20:43:36 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 20:58:41.059: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 20:47:27 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 20:58:41.059: INFO: kube-proxy-bootstrap-e2e-minion-group-b1s2 started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 20:58:41.059: INFO: konnectivity-agent-q4nqj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container konnectivity-agent ready: false, restart count 6 Nov 26 20:58:41.059: INFO: csi-mockplugin-0 started at 2022-11-26 20:43:36 +0000 UTC (0+3 container statuses recorded) Nov 26 20:58:41.059: INFO: Container 
csi-provisioner ready: true, restart count 5 Nov 26 20:58:41.059: INFO: Container driver-registrar ready: true, restart count 5 Nov 26 20:58:41.059: INFO: Container mock ready: true, restart count 5 Nov 26 20:58:41.059: INFO: ss-0 started at 2022-11-26 20:43:11 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container webserver ready: false, restart count 7 Nov 26 20:58:41.059: INFO: pvc-volume-tester-xtzsl started at 2022-11-26 20:52:28 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container volume-tester ready: false, restart count 0 Nov 26 20:58:41.059: INFO: reallocate-nodeport-test-rhdnv started at 2022-11-26 20:49:59 +0000 UTC (0+1 container statuses recorded) Nov 26 20:58:41.059: INFO: Container netexec ready: true, restart count 4 Nov 26 20:58:41.059: INFO: metadata-proxy-v0.1-6l49k started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 20:58:41.059: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 20:58:41.059: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 20:58:41.339: INFO: Latency metrics for node bootstrap-e2e-minion-group-b1s2 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update tear down framework | framework.go:193 STEP: Destroying namespace "addon-update-test-998" for this suite. 11/26/22 20:58:41.34
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0) test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?) test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 20:41:54.276 Nov 26 20:41:54.276: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 20:41:54.278 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 20:41:54.41 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 20:41:54.494 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-2699/external-local-update with type=LoadBalancer 11/26/22 20:41:54.715 STEP: setting ExternalTrafficPolicy=Local 11/26/22 20:41:54.716 STEP: waiting for loadbalancer for service esipp-2699/external-local-update 11/26/22 20:41:54.773 Nov 26 20:41:54.773: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 20:43:06.855 Nov 26 20:43:06.903: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 20:43:06.961: INFO: Found all 1 pods Nov 26 20:43:06.961: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-sg9sz] Nov 26 20:43:06.961: INFO: Waiting up to 2m0s for pod "external-local-update-sg9sz" in namespace "esipp-2699" to be "running and ready" Nov 26 20:43:07.002: INFO: Pod "external-local-update-sg9sz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.427616ms Nov 26 20:43:07.003: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-sg9sz' on 'bootstrap-e2e-minion-group-6k9m' to be 'Running' but was 'Pending' Nov 26 20:43:09.045: INFO: Pod "external-local-update-sg9sz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083628472s Nov 26 20:43:09.045: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-sg9sz' on 'bootstrap-e2e-minion-group-6k9m' to be 'Running' but was 'Pending' Nov 26 20:43:11.046: INFO: Pod "external-local-update-sg9sz": Phase="Running", Reason="", readiness=true. Elapsed: 4.084968025s Nov 26 20:43:11.046: INFO: Pod "external-local-update-sg9sz" satisfied condition "running and ready" Nov 26 20:43:11.046: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-update-sg9sz] STEP: waiting for loadbalancer for service esipp-2699/external-local-update 11/26/22 20:43:11.046 Nov 26 20:43:11.046: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 20:43:11.087 STEP: Performing setup for networking test in namespace esipp-2699 11/26/22 20:43:12.437 STEP: creating a selector 11/26/22 20:43:12.437 STEP: Creating the service pods in kubernetes 11/26/22 20:43:12.437 Nov 26 20:43:12.437: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 20:43:12.823: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-2699" to be "running and ready" Nov 26 20:43:12.874: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 50.870916ms Nov 26 20:43:12.874: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 20:43:14.916: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.09342075s Nov 26 20:43:14.916: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 20:43:16.931: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.108217311s Nov 26 20:43:16.931: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:18.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.111771948s Nov 26 20:43:18.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:20.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.094812793s Nov 26 20:43:20.918: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:22.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.094672114s Nov 26 20:43:22.918: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:24.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.094749834s Nov 26 20:43:24.918: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:26.917: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.094182731s Nov 26 20:43:26.917: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:28.922: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.09955153s Nov 26 20:43:28.922: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:30.969: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.146556762s Nov 26 20:43:30.969: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:32.963: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.139698755s Nov 26 20:43:32.963: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 20:43:34.948: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.125340013s Nov 26 20:43:34.948: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 20:43:34.948: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 20:43:35.018: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-2699" to be "running and ready" Nov 26 20:43:35.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 54.746913ms Nov 26 20:43:35.073: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:37.137: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.118737108s Nov 26 20:43:37.137: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:39.124: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.105999163s Nov 26 20:43:39.124: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:41.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.098979184s Nov 26 20:43:41.117: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:43.144: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.125435673s Nov 26 20:43:43.144: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:45.118: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.099806881s Nov 26 20:43:45.118: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:47.115: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.097053857s Nov 26 20:43:47.115: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:49.116: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.097399985s Nov 26 20:43:49.116: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 20:43:51.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.098576336s Nov 26 20:43:51.117: INFO: The phase of Pod netserver-1 is Running (Ready = false)
[... identical poll entries repeated roughly every 2s from 18s through 3m16s elapsed; Pod netserver-1 stayed Running (Ready = false) throughout, with two longer gaps between polls (1m04s -> 1m33s, 20:44:39 to 20:45:08, and 2m24s -> 3m09s, 20:45:59 to 20:46:44) ...]
Nov 26 20:46:53.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 3m18.099029168s Nov 26 20:46:53.117: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 5m0.392s)
    test/e2e/network/loadbalancer.go:1480
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/network/loadbalancer.go:1480
  At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m42.231s)
    test/e2e/framework/network/utils.go:761

  Spec Goroutine
  goroutine 738 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc002cc84c8, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x70?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a8efb8?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
    k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000598b60}, {0xc001ceda70, 0xa}, {0xc002b8e023, 0xb}, {0x75ee704, 0x11}, 0xc000fe7800?, 0x7895ad0)
      test/e2e/framework/pod/wait.go:290
    k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000598b60?}, {0xc002b8e023?, 0x0?}, {0xc001ceda70?, 0x0?}, 0xc0001a18e0?)
      test/e2e/framework/pod/wait.go:564
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0)
      test/e2e/framework/network/utils.go:866
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?)
      test/e2e/framework/network/utils.go:763
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?)
      test/e2e/framework/network/utils.go:778
    > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?})
      test/e2e/framework/network/utils.go:131
    > k8s.io/kubernetes/test/e2e/network.glob..func20.7()
      test/e2e/network/loadbalancer.go:1544
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc002182780})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 20:46:55.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.098417195s
Nov 26 20:46:55.117: INFO: The phase of Pod netserver-1 is Running (Ready = false)
[... identical poll entries at 3m22s and 3m24s elapsed ...]
Nov 26 20:47:01.115: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 3m26.097168017s Nov 26 20:47:01.115: INFO: The phase of Pod netserver-1 is Running (Ready = false)
[... identical poll entries repeated every ~2s from 3m28s through 3m38s elapsed; a second Progress Report for Ginkgo Process #13 (Spec Runtime: 5m20.394s, Node Runtime: 5m20.004s, Step Runtime: 4m2.233s) showed goroutine 738 blocked in the same wait.PollImmediate / WaitForPodCondition stack as the 5m0s report ...]
Nov 26 20:47:15.116: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 3m40.097548572s
Nov 26 20:47:15.116: INFO: The phase of Pod netserver-1 is Running (Ready = true)
Nov 26 20:47:15.116: INFO: Pod "netserver-1" satisfied condition "running and ready"
Nov 26 20:47:15.162: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-2699" to be "running and ready"
Nov 26 20:47:15.204: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 42.050572ms
Nov 26 20:47:15.204: INFO: The phase of Pod netserver-2 is Running (Ready = false)
[... identical netserver-2 poll entries at 2s and 4s elapsed ...]
Nov 26 20:47:21.246: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false.
Elapsed: 6.08465205s Nov 26 20:47:21.246: INFO: The phase of Pod netserver-2 is Running (Ready = false)
[... identical poll entries repeated every ~2s from 8s through 28s elapsed; a Progress Report for Ginkgo Process #13 (Spec Runtime: 5m40.396s, Node Runtime: 5m40.005s, Step Runtime: 4m22.235s) showed goroutine 738 blocked in the same wait.PollImmediate / WaitForPodCondition stack, now waiting on netserver-2 ...]
Nov 26 20:47:45.335: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false.
Elapsed: 30.17366347s Nov 26 20:47:45.335: INFO: The phase of Pod netserver-2 is Running (Ready = false)
[... identical poll entries repeated every ~2s from 32s through 38s elapsed; a Progress Report for Ginkgo Process #13 (Spec Runtime: 6m0.398s, Node Runtime: 6m0.007s, Step Runtime: 4m42.237s) again showed goroutine 738 in the same wait.PollImmediate / WaitForPodCondition stack ...]
Nov 26 20:47:55.281: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false.
Elapsed: 40.119975941s Nov 26 20:47:55.282: INFO: The phase of Pod netserver-2 is Running (Ready = false) ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m20.401s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m20.01s) test/e2e/network/loadbalancer.go:1480 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m2.24s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 738 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a7b680, 0xc002744400) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003a03180, 0xc002744400, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003f3a000?}, 0xc002744400?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003f3a000, 0xc002744400) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc000aa21e0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc001db7200, 0xc002744300) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc000f1b800, 0xc002744200) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc002744200, {0x7fad100, 0xc000f1b800}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001db7230, 0xc002744200, {0x7fbdccc6df18?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001db7230, 0xc002744200) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc002744000, {0x7fe0bc8, 0xc000136008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc002744000, {0x7fe0bc8, 0xc000136008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc0013c0020, {0x7fe0bc8, 0xc000136008}, {0xc002b8e2a3, 0xb}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition.func1() test/e2e/framework/pod/wait.go:291 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xae75720?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc002a1ebb8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a8efb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000598b60}, {0xc001ceda70, 0xa}, {0xc002b8e2a3, 0xb}, {0x75ee704, 0x11}, 0xc0010b77e0?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000598b60?}, {0xc002b8e2a3?, 0x0?}, {0xc001ceda70?, 0x0?}, 0xc0001a18e0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc002182780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 6m40.404s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 6m40.013s) test/e2e/network/loadbalancer.go:1480 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m22.242s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 738 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a7b680, 0xc002744400) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003a03180, 0xc002744400, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003f3a000?}, 0xc002744400?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc003f3a000, 0xc002744400) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc000aa21e0?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc001db7200, 0xc002744300) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc000f1b800, 0xc002744200) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc002744200, {0x7fad100, 0xc000f1b800}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc001db7230, 0xc002744200, {0x7fbdccc6df18?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc001db7230, 0xc002744200) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc002744000, {0x7fe0bc8, 0xc000136008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc002744000, {0x7fe0bc8, 0xc000136008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc0013c0020, {0x7fe0bc8, 0xc000136008}, {0xc002b8e2a3, 0xb}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition.func1() test/e2e/framework/pod/wait.go:291 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xae75720?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc002a1ebb8, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x68?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000a8efb8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc000598b60}, {0xc001ceda70, 0xa}, {0xc002b8e2a3, 0xb}, {0x75ee704, 0x11}, 0xc0010b77e0?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc000598b60?}, {0xc002b8e2a3?, 0x0?}, {0xc001ceda70?, 0x0?}, 0xc0001a18e0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc002182780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 20:48:57.246: INFO: Encountered non-retryable error while getting pod esipp-2699/netserver-2: Get "https://35.233.174.213/api/v1/namespaces/esipp-2699/pods/netserver-2": stream error: stream ID 415; INTERNAL_ERROR; received from peer Nov 26 20:48:57.246: INFO: Unexpected error: <*fmt.wrapError | 0xc002a34580>: { msg: "error while waiting for pod esipp-2699/netserver-2 to be running and ready: Get \"https://35.233.174.213/api/v1/namespaces/esipp-2699/pods/netserver-2\": stream error: stream ID 415; INTERNAL_ERROR; received from peer", err: <*url.Error | 0xc00257a420>{ Op: "Get", URL: "https://35.233.174.213/api/v1/namespaces/esipp-2699/pods/netserver-2", Err: <http2.StreamError>{ StreamID: 415, Code: 2, Cause: <*errors.errorString | 0xc0000d18d0>{ s: "received from peer", }, }, }, } Nov 26 20:48:57.247: FAIL: error while waiting for pod esipp-2699/netserver-2 to be running and ready: Get "https://35.233.174.213/api/v1/namespaces/esipp-2699/pods/netserver-2": stream error: stream ID 415; INTERNAL_ERROR; received from peer Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0) test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 Nov 26 20:49:04.911: INFO: Waiting up to 15m0s for service "external-local-update" to have no LoadBalancer ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field (Spec Runtime: 7m20.41s) test/e2e/network/loadbalancer.go:1480 In [It] (Node Runtime: 7m20.019s) test/e2e/network/loadbalancer.go:1480 At [By Step] Creating the service pods in kubernetes (Step Runtime: 6m2.248s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 738 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc001fe44b0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x20?, 0x2fd9d05?, 0x48?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc000136000}, 0xc000a8ea80?, 0xc000a8ea70?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x7fffe1c244fd?, 0xa?, 0x7fe0bc8?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 k8s.io/kubernetes/test/e2e/framework/providers/gce.(*Provider).EnsureLoadBalancerResourcesDeleted(0xc000125698, {0xc001b723a0, 0xd}, {0x77c6ae2, 0x2}) test/e2e/framework/providers/gce/gce.go:195 k8s.io/kubernetes/test/e2e/framework.EnsureLoadBalancerResourcesDeleted(...) test/e2e/framework/util.go:551 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy.func1() test/e2e/framework/service/jig.go:602 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancerDestroy(0xc00206cfa0, {0xc001b723a0?, 0x0?}, 0x0?, 0x0?) test/e2e/framework/service/jig.go:614 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).ChangeServiceType(0x0?, {0x75c5095?, 0x0?}, 0x0?) test/e2e/framework/service/jig.go:186 > k8s.io/kubernetes/test/e2e/network.glob..func20.7.1() test/e2e/network/loadbalancer.go:1494 panic({0x70eb7e0, 0xc00032f500}) /usr/local/go/src/runtime/panic.go:884 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0000925a0, 0xec}, {0xc000a8f048?, 0x75b521a?, 0xc000a8f068?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00019c1c0, 0xd7}, {0xc000a8f0e0?, 0xc00019c1c0?, 0xc000a8f108?}) test/e2e/framework/log.go:61 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc002a34580}, {0x0?, 0xc001ceda70?, 0x0?}) test/e2e/framework/expect.go:76 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc0002a6620, {0x75c6f7c, 0x9}, 0xc00271b3b0) test/e2e/framework/network/utils.go:866 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc0002a6620, 0x7fbda0b04a30?) test/e2e/framework/network/utils.go:763 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc0002a6620, 0x3c?) 
test/e2e/framework/network/utils.go:778 > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 > k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591be, 0xc002182780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 20:49:15.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 20:49:15.215: INFO: Output of kubectl describe svc: Nov 26 20:49:15.216: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=esipp-2699 describe svc --namespace=esipp-2699' Nov 26 20:49:15.539: INFO: stderr: "" Nov 26 20:49:15.539: INFO: stdout: "Name: external-local-update\nNamespace: esipp-2699\nLabels: testid=external-local-update-2016b2a9-f9b1-42a0-93ce-403b4746517f\nAnnotations: <none>\nSelector: testid=external-local-update-2016b2a9-f9b1-42a0-93ce-403b4746517f\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.5.225\nIPs: 10.0.5.225\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.2.87:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 6m46s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 6m9s 
service-controller Ensured load balancer\n Normal ExternalTrafficPolicy 6m4s service-controller Local -> Cluster\n Normal EnsuringLoadBalancer 3m25s service-controller Ensuring load balancer\n" Nov 26 20:49:15.539: INFO: Name: external-local-update Namespace: esipp-2699 Labels: testid=external-local-update-2016b2a9-f9b1-42a0-93ce-403b4746517f Annotations: <none> Selector: testid=external-local-update-2016b2a9-f9b1-42a0-93ce-403b4746517f Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.5.225 IPs: 10.0.5.225 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.64.2.87:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 6m46s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 6m9s service-controller Ensured load balancer Normal ExternalTrafficPolicy 6m4s service-controller Local -> Cluster Normal EnsuringLoadBalancer 3m25s service-controller Ensuring load balancer [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 20:49:15.539 STEP: Collecting events from namespace "esipp-2699". 11/26/22 20:49:15.539 STEP: Found 35 events. 
11/26/22 20:49:15.59 Nov 26 20:49:15.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-update-sg9sz: { } Scheduled: Successfully assigned esipp-2699/external-local-update-sg9sz to bootstrap-e2e-minion-group-6k9m Nov 26 20:49:15.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-2699/netserver-0 to bootstrap-e2e-minion-group-01xg Nov 26 20:49:15.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-2699/netserver-1 to bootstrap-e2e-minion-group-6k9m Nov 26 20:49:15.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-2699/netserver-2 to bootstrap-e2e-minion-group-b1s2 Nov 26 20:49:15.590: INFO: At 2022-11-26 20:42:29 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:06 +0000 UTC - event for external-local-update: {replication-controller } SuccessfulCreate: Created pod: external-local-update-sg9sz Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:06 +0000 UTC - event for external-local-update: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:07 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} Started: Started container netexec Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:07 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} Created: Created container netexec Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:07 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:10 +0000 UTC - event for external-local-update-sg9sz: {kubelet 
bootstrap-e2e-minion-group-6k9m} Unhealthy: Readiness probe failed: Get "http://10.64.2.47:80/hostName": read tcp 10.64.2.1:59644->10.64.2.47:80: read: connection reset by peer Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:10 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} Killing: Stopping container netexec Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:11 +0000 UTC - event for external-local-update: {service-controller } ExternalTrafficPolicy: Local -> Cluster Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:11 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:13 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-xfq2c" : failed to sync configmap cache: timed out waiting for the condition Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:13 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Started: Started container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:13 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:13 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Created: Created container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:14 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Killing: Stopping container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Started: Started container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:15 +0000 UTC - event for netserver-0: {kubelet 
bootstrap-e2e-minion-group-01xg} Created: Created container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:15 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:15 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:16 +0000 UTC - event for external-local-update-sg9sz: {kubelet bootstrap-e2e-minion-group-6k9m} BackOff: Back-off restarting failed container netexec in pod external-local-update-sg9sz_esipp-2699(7c294394-1a0c-4c88-80a0-28a0f5c1812b) Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:16 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Created: Created container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:16 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Started: Started container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:17 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Killing: Stopping container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:18 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-2699(ec0a898c-2ce9-4c96-a6d7-01157df69c09) Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:18 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 26 20:49:15.590: INFO: At 2022-11-26 20:43:25 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-2699(690c29fc-8ab7-493e-b77c-7ed3afd7fec4) Nov 26 20:49:15.590: INFO: At 2022-11-26 20:44:53 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Killing: Stopping container webserver Nov 26 20:49:15.590: INFO: At 2022-11-26 20:44:54 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 20:49:15.590: INFO: At 2022-11-26 20:45:05 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} BackOff: Back-off restarting failed container webserver in pod netserver-0_esipp-2699(85c9d0e8-3302-425b-a4d0-04632cade4ac) Nov 26 20:49:15.590: INFO: At 2022-11-26 20:45:50 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 20:49:15.632: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 20:49:15.632: INFO: external-local-update-sg9sz bootstrap-e2e-minion-group-6k9m Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:47:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:47:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:06 +0000 UTC }] Nov 26 20:49:15.632: INFO: netserver-0 bootstrap-e2e-minion-group-01xg Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:48:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:48:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC }] Nov 26 20:49:15.632: INFO: netserver-1 bootstrap-e2e-minion-group-6k9m Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:47:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:47:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC }] Nov 26 20:49:15.632: INFO: netserver-2 bootstrap-e2e-minion-group-b1s2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:46:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:46:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 20:43:12 +0000 UTC }] Nov 26 20:49:15.632: INFO: Nov 26 20:49:15.882: INFO: Logging node info for node bootstrap-e2e-master Nov 26 20:49:15.925: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 4001 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:45:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 
1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:45:49 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:45:49 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:45:49 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:45:49 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 20:49:15.925: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 20:49:15.970: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 20:49:16.032: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container kube-apiserver ready: true, restart count 0
Nov 26 20:49:16.032: INFO: metadata-proxy-v0.1-cbwjf started at 2022-11-26 20:40:24 +0000 UTC (0+2 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 20:49:16.032: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 20:49:16.032: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container etcd-container ready: true, restart count 4
Nov 26 20:49:16.032: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container konnectivity-server-container ready: true, restart count 1
Nov 26 20:49:16.032: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container kube-addon-manager ready: true, restart count 1
Nov 26 20:49:16.032: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container l7-lb-controller ready: false, restart count 4
Nov 26 20:49:16.032: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container kube-controller-manager ready: false, restart count 4
Nov 26 20:49:16.032: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container kube-scheduler ready: true, restart count 4
Nov 26 20:49:16.032: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.032: INFO: Container etcd-container ready: true, restart count 0
Nov 26 20:49:16.217: INFO: Latency metrics for node bootstrap-e2e-master
Nov 26 20:49:16.217: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg
Nov 26 20:49:16.260: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 5320 0 2022-11-26 20:40:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b]
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4017":"bootstrap-e2e-minion-group-01xg","csi-hostpath-multivolume-6507":"bootstrap-e2e-minion-group-01xg","csi-hostpath-provisioning-2859":"bootstrap-e2e-minion-group-01xg","csi-mock-csi-mock-volumes-674":"csi-mock-csi-mock-volumes-674"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 20:45:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 20:47:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 20:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:27 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6507^8ccadda4-6dcb-11ed-96e5-be7889f80e72],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6507^8ccadda4-6dcb-11ed-96e5-be7889f80e72,DevicePath:,},},Config:nil,},}
Nov 26 20:49:16.260: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg
Nov 26 20:49:16.311: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg
Nov 26 20:49:16.408: INFO: pod-subpath-test-dynamicpv-q2l6 started at 2022-11-26 20:42:23 +0000 UTC (1+2 container statuses recorded)
Nov 26 20:49:16.408: INFO: Init container init-volume-dynamicpv-q2l6 ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container test-container-subpath-dynamicpv-q2l6 ready: false, restart count 1
Nov 26 20:49:16.408: INFO: Container test-container-volume-dynamicpv-q2l6 ready: false, restart count 1
Nov 26 20:49:16.408: INFO: test-container-pod started at 2022-11-26 20:47:06 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container webserver ready: true, restart count 1
Nov 26 20:49:16.408: INFO: hostexec-bootstrap-e2e-minion-group-01xg-nm88j started at 2022-11-26 20:47:25 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container agnhost-container ready: true, restart count 2
Nov 26 20:49:16.408: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:47:28 +0000 UTC (0+7 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container csi-attacher ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container csi-provisioner ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container csi-resizer ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container csi-snapshotter ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container hostpath ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container liveness-probe ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container node-driver-registrar ready: true, restart count 2
Nov 26 20:49:16.408: INFO: netserver-0 started at 2022-11-26 20:43:07 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container webserver ready: false, restart count 4
Nov 26 20:49:16.408: INFO: net-tiers-svc-crlwd started at 2022-11-26 20:41:53 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container netexec ready: true, restart count 2
Nov 26 20:49:16.408: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:42:59 +0000 UTC (0+7 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container csi-attacher ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-provisioner ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-resizer ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-snapshotter ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container hostpath ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container liveness-probe ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container node-driver-registrar ready: true, restart count 1
Nov 26 20:49:16.408: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:47:51 +0000 UTC (0+7 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container csi-attacher ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-provisioner ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-resizer ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container csi-snapshotter ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container hostpath ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container liveness-probe ready: true, restart count 1
Nov 26 20:49:16.408: INFO: Container node-driver-registrar ready: true, restart count 1
Nov 26 20:49:16.408: INFO: ss-2 started at 2022-11-26 20:44:34 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container webserver ready: false, restart count 2
Nov 26 20:49:16.408: INFO: pod-2b51285b-3ebd-4075-841a-ef9ea380db3f started at 2022-11-26 20:47:28 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container write-pod ready: false, restart count 0
Nov 26 20:49:16.408: INFO: metadata-proxy-v0.1-h8gjd started at 2022-11-26 20:40:22 +0000 UTC (0+2 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 20:49:16.408: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 20:49:16.408: INFO: konnectivity-agent-bgjhj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container konnectivity-agent ready: false, restart count 6
Nov 26 20:49:16.408: INFO: external-provisioner-5dqdq started at 2022-11-26 20:42:12 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container nfs-provisioner ready: true, restart count 4
Nov 26 20:49:16.408: INFO: nfs-server started at 2022-11-26 20:45:55 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container nfs-server ready: true, restart count 1
Nov 26 20:49:16.408: INFO: netserver-0 started at 2022-11-26 20:43:12 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container webserver ready: true, restart count 5
Nov 26 20:49:16.408: INFO: kube-proxy-bootstrap-e2e-minion-group-01xg started at 2022-11-26 20:40:21 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container kube-proxy ready: true, restart count 5
Nov 26 20:49:16.408: INFO: coredns-6d97d5ddb-b4rcb started at 2022-11-26 20:40:44 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container coredns ready: false, restart count 5
Nov 26 20:49:16.408: INFO: pod-940038ae-c576-42ec-89c2-fcf196059e6e started at 2022-11-26 20:47:37 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container write-pod ready: false, restart count 0
Nov 26 20:49:16.408: INFO: execpod-dropz47gk started at 2022-11-26 20:42:08 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container agnhost-container ready: false, restart count 2
Nov 26 20:49:16.408: INFO: csi-mockplugin-0 started at 2022-11-26 20:47:27 +0000 UTC (0+4 container statuses recorded)
Nov 26 20:49:16.408: INFO: Container busybox ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container csi-provisioner ready: false, restart count 2
Nov 26 20:49:16.408: INFO: Container driver-registrar ready: true, restart count 2
Nov 26 20:49:16.408: INFO: Container mock ready: true, restart count 2
Nov 26 20:49:16.734: INFO: Latency metrics for node bootstrap-e2e-minion-group-01xg
Nov 26 20:49:16.734: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m
Nov 26 20:49:16.778: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 5319 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64
beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8054":"bootstrap-e2e-minion-group-6k9m","csi-hostpath-provisioning-1205":"bootstrap-e2e-minion-group-6k9m"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 20:45:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 20:45:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 20:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 
20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:49:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:49:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:49:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:49:11 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1205^51f18e9b-6dcb-11ed-9eea-be6c3e70dd45,DevicePath:,},},Config:nil,},}
Nov 26 20:49:16.778: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m
Nov 26 20:49:16.825: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m
Nov 26 20:49:16.887: INFO: ss-1 started at 2022-11-26 20:44:34 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container webserver ready: false, restart count 4
Nov 26 20:49:16.887: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:45:52 +0000 UTC (0+7 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container csi-attacher ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container csi-provisioner ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container csi-resizer ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container csi-snapshotter ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container hostpath ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container liveness-probe ready: true, restart count 1
Nov 26 20:49:16.887: INFO: Container node-driver-registrar ready: true, restart count 1
Nov 26 20:49:16.887: INFO: metadata-proxy-v0.1-ltr6z started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container metadata-proxy ready: true, restart count 0
Nov 26 20:49:16.887: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 20:49:16.887: INFO: inclusterclient started at 2022-11-26 20:42:20 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container inclusterclient ready: false, restart count 0
Nov 26 20:49:16.887: INFO: external-local-update-sg9sz started at 2022-11-26 20:43:06 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container netexec ready: true, restart count 5
Nov 26 20:49:16.887: INFO: csi-mockplugin-0 started at 2022-11-26 20:42:22 +0000 UTC (0+4 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container busybox ready: false, restart count 5
Nov 26 20:49:16.887: INFO: Container csi-provisioner ready: false, restart count 5
Nov 26 20:49:16.887: INFO: Container driver-registrar ready: false, restart count 5
Nov 26 20:49:16.887: INFO: Container mock ready: false, restart count 5
Nov 26 20:49:16.887: INFO: konnectivity-agent-dvrb2 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container konnectivity-agent ready: false, restart count 5
Nov 26 20:49:16.887: INFO: external-provisioner-7z6jw started at 2022-11-26 20:45:50 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container nfs-provisioner ready: true, restart count 0
Nov 26 20:49:16.887: INFO: lb-sourcerange-4vxht started at 2022-11-26 20:42:14 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container netexec ready: true, restart count 3
Nov 26 20:49:16.887: INFO: l7-default-backend-8549d69d99-c89m7 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container default-http-backend ready: true, restart count 0
Nov 26 20:49:16.887: INFO: volume-snapshot-controller-0 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container volume-snapshot-controller ready: true, restart count 2
Nov 26 20:49:16.887: INFO: netserver-1 started at 2022-11-26 20:43:07 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container webserver ready: false, restart count 5
Nov 26 20:49:16.887: INFO: kube-proxy-bootstrap-e2e-minion-group-6k9m started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container kube-proxy ready: true, restart count 5
Nov 26 20:49:16.887: INFO: kube-dns-autoscaler-5f6455f985-mcwh8 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container autoscaler ready: false, restart count 4
Nov 26 20:49:16.887: INFO: netserver-1 started at 2022-11-26 20:43:12 +0000 UTC (0+1 container statuses recorded)
Nov 26 20:49:16.887: INFO: Container
webserver ready: true, restart count 5 Nov 26 20:49:16.887: INFO: coredns-6d97d5ddb-l2p8d started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:16.887: INFO: Container coredns ready: false, restart count 5 Nov 26 20:49:16.887: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:43:27 +0000 UTC (0+7 container statuses recorded) Nov 26 20:49:16.887: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container hostpath ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 20:49:16.887: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 20:49:16.887: INFO: pod-02e64b2e-aa04-499f-9497-04625a949465 started at 2022-11-26 20:42:11 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:16.887: INFO: Container write-pod ready: false, restart count 0 Nov 26 20:49:17.118: INFO: Latency metrics for node bootstrap-e2e-minion-group-6k9m Nov 26 20:49:17.118: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:49:17.160: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 5101 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-4070":"bootstrap-e2e-minion-group-b1s2","csi-mock-csi-mock-volumes-1565":"bootstrap-e2e-minion-group-b1s2","csi-mock-csi-mock-volumes-99":"bootstrap-e2e-minion-group-b1s2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 20:45:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 20:47:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 20:47:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 20:45:26 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 20:47:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-1565^8a9bd5ef-6dcb-11ed-a86a-c2a45d5f0e5c],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 20:49:17.161: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 20:49:17.206: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 20:49:17.269: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 20:47:27 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 20:49:17.269: INFO: pod-back-off-image started at 2022-11-26 20:47:44 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container back-off ready: false, restart count 3 Nov 26 20:49:17.269: INFO: test-hostpath-type-fbdmh started at 2022-11-26 20:47:53 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container host-path-testing ready: false, restart count 0 Nov 26 20:49:17.269: INFO: 
csi-mockplugin-attacher-0 started at 2022-11-26 20:43:36 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 20:49:17.269: INFO: kube-proxy-bootstrap-e2e-minion-group-b1s2 started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container kube-proxy ready: true, restart count 5 Nov 26 20:49:17.269: INFO: pod-secrets-c7e6a8a9-801d-4f17-9701-1bd5a7890768 started at 2022-11-26 20:47:25 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 20:49:17.269: INFO: pod-secrets-8d0dbda2-7e8d-45bb-adac-9a886730ca26 started at 2022-11-26 20:43:34 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 20:49:17.269: INFO: metadata-proxy-v0.1-6l49k started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 20:49:17.269: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 20:49:17.269: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 20:49:17.269: INFO: konnectivity-agent-q4nqj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 26 20:49:17.269: INFO: csi-mockplugin-0 started at 2022-11-26 20:43:36 +0000 UTC (0+3 container statuses recorded) Nov 26 20:49:17.269: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 20:49:17.269: INFO: Container driver-registrar ready: true, restart count 1 Nov 26 20:49:17.269: INFO: Container mock ready: true, restart count 1 Nov 26 20:49:17.269: INFO: ss-0 started at 2022-11-26 20:43:11 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container webserver ready: false, restart count 5 Nov 26 20:49:17.269: INFO: mutability-test-zzngg started at 
2022-11-26 20:42:14 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container netexec ready: true, restart count 4 Nov 26 20:49:17.269: INFO: csi-hostpathplugin-0 started at 2022-11-26 20:44:36 +0000 UTC (0+7 container statuses recorded) Nov 26 20:49:17.269: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container hostpath ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 20:49:17.269: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 20:49:17.269: INFO: pvc-volume-tester-75dgx started at 2022-11-26 20:47:33 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container volume-tester ready: false, restart count 0 Nov 26 20:49:17.269: INFO: netserver-2 started at 2022-11-26 20:43:07 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container webserver ready: false, restart count 2 Nov 26 20:49:17.269: INFO: metrics-server-v0.5.2-867b8754b9-xh56x started at 2022-11-26 20:41:00 +0000 UTC (0+2 container statuses recorded) Nov 26 20:49:17.269: INFO: Container metrics-server ready: false, restart count 4 Nov 26 20:49:17.269: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 26 20:49:17.269: INFO: execpod-acceptht5sz started at 2022-11-26 20:41:52 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 20:49:17.269: INFO: host-test-container-pod started at 2022-11-26 20:47:06 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container agnhost-container ready: false, restart count 3 Nov 26 20:49:17.269: INFO: netserver-2 started 
at 2022-11-26 20:43:12 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container webserver ready: false, restart count 6 Nov 26 20:49:17.269: INFO: pod-134805c8-71d2-4d18-9778-ebc4104db725 started at 2022-11-26 20:47:41 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container write-pod ready: false, restart count 0 Nov 26 20:49:17.269: INFO: csi-mockplugin-0 started at 2022-11-26 20:47:27 +0000 UTC (0+3 container statuses recorded) Nov 26 20:49:17.269: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 20:49:17.269: INFO: Container driver-registrar ready: true, restart count 2 Nov 26 20:49:17.269: INFO: Container mock ready: true, restart count 2 Nov 26 20:49:17.269: INFO: test-hostpath-type-7wfxg started at 2022-11-26 20:47:50 +0000 UTC (0+1 container statuses recorded) Nov 26 20:49:17.269: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 20:49:17.518: INFO: Latency metrics for node bootstrap-e2e-minion-group-b1s2 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-2699" for this suite. 11/26/22 20:49:17.518
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000330d20, {0x75c6f7c, 0x9}, 0xc00383f9b0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000330d20, 0x7f948ced3448?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000330d20, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012fa000, {0x0, 0x0, 0xc00011bd20?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 There were additional failures detected after the initial failure: [FAILED] Nov 26 21:27:48.344: failed to list events in namespace "esipp-1895": Get "https://35.233.174.213/api/v1/namespaces/esipp-1895/events": dial tcp 35.233.174.213:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 21:27:48.384: Couldn't delete ns: "esipp-1895": Delete "https://35.233.174.213/api/v1/namespaces/esipp-1895": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/esipp-1895", Err:(*net.OpError)(0xc002d6d9f0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 21:24:02.21 Nov 26 21:24:02.210: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 21:24:02.211 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:24:02.388 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:24:02.481 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-1895/external-local-nodes with type=LoadBalancer 11/26/22 21:24:02.725 STEP: setting ExternalTrafficPolicy=Local 11/26/22 21:24:02.725 STEP: waiting for loadbalancer for service esipp-1895/external-local-nodes 11/26/22 21:24:02.812 Nov 26 21:24:02.812: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-1895/external-local-nodes 11/26/22 21:24:40.914 Nov 26 21:24:40.914: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-1895 11/26/22 21:24:40.977 STEP: creating a selector 11/26/22 21:24:40.977 STEP: Creating the service pods in kubernetes 11/26/22 21:24:40.977 Nov 26 21:24:40.977: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 21:24:41.748: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-1895" to be "running and ready" Nov 26 21:24:41.861: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 113.157581ms Nov 26 21:24:41.861: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 21:24:43.940: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191874034s Nov 26 21:24:43.940: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 21:24:45.974: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.225977316s Nov 26 21:24:45.974: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:47.950: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.201694475s Nov 26 21:24:47.950: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:49.919: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.170959762s Nov 26 21:24:49.919: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:51.950: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.201528743s Nov 26 21:24:51.950: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:53.941: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.192794292s Nov 26 21:24:53.941: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:55.928: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.17984174s Nov 26 21:24:55.928: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:57.919: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.170485012s Nov 26 21:24:57.919: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:24:59.930: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.181752115s Nov 26 21:24:59.930: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:25:01.962: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.214124502s Nov 26 21:25:01.962: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:25:03.918: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.169482233s Nov 26 21:25:03.918: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 21:25:03.918: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 21:25:03.986: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-1895" to be "running and ready" Nov 26 21:25:04.059: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 73.178429ms Nov 26 21:25:04.059: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:06.131: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.145707286s Nov 26 21:25:06.131: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:08.141: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.155573179s Nov 26 21:25:08.141: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:10.172: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.18606129s Nov 26 21:25:10.172: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:12.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.146572579s Nov 26 21:25:12.132: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:14.325: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.338988695s Nov 26 21:25:14.325: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:16.107: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.121213727s Nov 26 21:25:16.107: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:18.200: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.214620978s Nov 26 21:25:18.200: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:20.111: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.125236624s Nov 26 21:25:20.111: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:22.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 18.131577976s Nov 26 21:25:22.117: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:24.128: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 20.14213402s Nov 26 21:25:24.128: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:26.128: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 22.142361481s Nov 26 21:25:26.128: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:28.250: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 24.264271885s Nov 26 21:25:28.250: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:30.118: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 26.132666249s Nov 26 21:25:30.118: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:32.161: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 28.175763674s Nov 26 21:25:32.161: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:34.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 30.146936684s Nov 26 21:25:34.133: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:36.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.131535501s Nov 26 21:25:36.117: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:38.185: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 34.199929272s Nov 26 21:25:38.185: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:40.113: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 36.127379912s Nov 26 21:25:40.113: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:42.127: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 38.141004392s Nov 26 21:25:42.127: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:44.112: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 40.126771867s Nov 26 21:25:44.112: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:46.124: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 42.138812122s Nov 26 21:25:46.124: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:48.122: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 44.136794929s Nov 26 21:25:48.122: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:50.146: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 46.160837985s Nov 26 21:25:50.146: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:52.111: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 48.125036786s Nov 26 21:25:52.111: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:54.118: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 50.132069391s Nov 26 21:25:54.118: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:56.120: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.134926229s Nov 26 21:25:56.120: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:58.134: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 54.148729993s Nov 26 21:25:58.134: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:00.143: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 56.157252405s Nov 26 21:26:00.143: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:02.184: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 58.198247897s Nov 26 21:26:02.184: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:04.136: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.150399266s Nov 26 21:26:04.136: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:06.168: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.182679216s Nov 26 21:26:06.168: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:08.136: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.150841195s Nov 26 21:26:08.136: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:10.210: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.224900191s Nov 26 21:26:10.210: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:12.128: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.142694064s Nov 26 21:26:12.128: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:14.129: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.143713009s Nov 26 21:26:14.129: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:16.127: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m12.141614643s Nov 26 21:26:16.127: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:18.116: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.130547344s Nov 26 21:26:18.116: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:20.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.154629249s Nov 26 21:26:20.140: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:22.124: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.138014113s Nov 26 21:26:22.124: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:24.121: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.135150856s Nov 26 21:26:24.121: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:26.122: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.135954765s Nov 26 21:26:26.122: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:28.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.146093681s Nov 26 21:26:28.132: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:30.150: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.164221585s Nov 26 21:26:30.150: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:32.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.146952029s Nov 26 21:26:32.133: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:34.112: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.126187494s Nov 26 21:26:34.112: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:36.118: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m32.13267237s Nov 26 21:26:36.118: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:38.132: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.146219763s Nov 26 21:26:38.132: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:40.122: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.136432479s Nov 26 21:26:40.122: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:42.126: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.140701687s Nov 26 21:26:42.126: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:44.130: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.144100713s Nov 26 21:26:44.130: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:46.130: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.144430982s Nov 26 21:26:46.130: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:48.234: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.248566675s Nov 26 21:26:48.234: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:50.153: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.167015471s Nov 26 21:26:50.153: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:52.139: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.153436445s Nov 26 21:26:52.139: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:54.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.131352885s Nov 26 21:26:54.117: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:56.151: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m52.165427306s Nov 26 21:26:56.151: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:26:58.128: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.142690587s Nov 26 21:26:58.128: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:00.119: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.133290581s Nov 26 21:27:00.119: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:02.110: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.124774497s Nov 26 21:27:02.110: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:04.118: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.132199302s Nov 26 21:27:04.118: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:06.138: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.152669302s Nov 26 21:27:06.138: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:08.214: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.228106116s Nov 26 21:27:08.214: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:10.126: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.140507243s Nov 26 21:27:10.126: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:12.148: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.162861822s Nov 26 21:27:12.148: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:14.105: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.11970544s Nov 26 21:27:14.105: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:16.133: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m12.147670875s Nov 26 21:27:16.133: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:18.142: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.156557168s Nov 26 21:27:18.142: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:20.143: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.15717914s Nov 26 21:27:20.143: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:22.136: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.150509823s Nov 26 21:27:22.136: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:24.145: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.159346491s Nov 26 21:27:24.145: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:26.126: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.140794173s Nov 26 21:27:26.126: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:28.228: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.241973307s Nov 26 21:27:28.228: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:30.124: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.138236381s Nov 26 21:27:30.124: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:32.177: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.191766732s Nov 26 21:27:32.177: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:34.143: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.157103923s Nov 26 21:27:34.143: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:36.128: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m32.142661974s Nov 26 21:27:36.128: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:38.126: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.140575388s Nov 26 21:27:38.126: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:40.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.154056429s Nov 26 21:27:40.140: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:42.141: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.155007958s Nov 26 21:27:42.141: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:44.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.154759447s Nov 26 21:27:44.140: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:46.117: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.131397987s Nov 26 21:27:46.117: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:27:48.101: INFO: Encountered non-retryable error while getting pod esipp-1895/netserver-1: Get "https://35.233.174.213/api/v1/namespaces/esipp-1895/pods/netserver-1": dial tcp 35.233.174.213:443: connect: connection refused Nov 26 21:27:48.101: INFO: Unexpected error: <*fmt.wrapError | 0xc003095d00>: { msg: "error while waiting for pod esipp-1895/netserver-1 to be running and ready: Get \"https://35.233.174.213/api/v1/namespaces/esipp-1895/pods/netserver-1\": dial tcp 35.233.174.213:443: connect: connection refused", err: <*url.Error | 0xc001acd9b0>{ Op: "Get", URL: "https://35.233.174.213/api/v1/namespaces/esipp-1895/pods/netserver-1", Err: <*net.OpError | 0xc002d6d5e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001acd980>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003095cc0>{ 
Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Nov 26 21:27:48.101: FAIL: error while waiting for pod esipp-1895/netserver-1 to be running and ready: Get "https://35.233.174.213/api/v1/namespaces/esipp-1895/pods/netserver-1": dial tcp 35.233.174.213:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000330d20, {0x75c6f7c, 0x9}, 0xc00383f9b0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000330d20, 0x7f948ced3448?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000330d20, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012fa000, {0x0, 0x0, 0xc00011bd20?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 Nov 26 21:27:48.140: INFO: Unexpected error: <*errors.errorString | 0xc000fb3a90>: { s: "failed to get Service \"external-local-nodes\": Get \"https://35.233.174.213/api/v1/namespaces/esipp-1895/services/external-local-nodes\": dial tcp 35.233.174.213:443: connect: connection refused", } Nov 26 21:27:48.140: FAIL: failed to get Service "external-local-nodes": Get "https://35.233.174.213/api/v1/namespaces/esipp-1895/services/external-local-nodes": dial tcp 35.233.174.213:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5.2() test/e2e/network/loadbalancer.go:1366 +0xae panic({0x70eb7e0, 0xc000130cb0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00217e000, 0xd0}, {0xc0038c9700?, 0xc00217e000?, 0xc0038c9728?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa3f20, 0xc003095d00}, {0x0?, 
0xc003ada580?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000330d20, {0x75c6f7c, 0x9}, 0xc00383f9b0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000330d20, 0x7f948ced3448?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000330d20, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012fa000, {0x0, 0x0, 0xc00011bd20?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1382 +0x445 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 21:27:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 21:27:48.180: INFO: Output of kubectl describe svc: Nov 26 21:27:48.180: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=esipp-1895 describe svc --namespace=esipp-1895' Nov 26 21:27:48.304: INFO: rc: 1 Nov 26 21:27:48.304: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 21:27:48.304 STEP: Collecting events from namespace "esipp-1895". 
11/26/22 21:27:48.305 Nov 26 21:27:48.344: INFO: Unexpected error: failed to list events in namespace "esipp-1895": <*url.Error | 0xc0030b0b10>: { Op: "Get", URL: "https://35.233.174.213/api/v1/namespaces/esipp-1895/events", Err: <*net.OpError | 0xc002fded20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0029e4b10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003a07d80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 21:27:48.344: FAIL: failed to list events in namespace "esipp-1895": Get "https://35.233.174.213/api/v1/namespaces/esipp-1895/events": dial tcp 35.233.174.213:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0038cc5c0, {0xc003ada580, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc003c701a0}, {0xc003ada580, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0038cc650?, {0xc003ada580?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0012fa000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004830390?, 0xc001dbef50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004830390?, 0x7fadfa0?}, {0xae73300?, 0xc001dbef80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-1895" for this suite. 
11/26/22 21:27:48.345 Nov 26 21:27:48.384: FAIL: Couldn't delete ns: "esipp-1895": Delete "https://35.233.174.213/api/v1/namespaces/esipp-1895": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/esipp-1895", Err:(*net.OpError)(0xc002d6d9f0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0012fa000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004830310?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004830310?, 0x7fe0bc8?}, {0xae73300?, 0x100000000000000?, 0xc0027aa0f0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000c56380, {0x75c6f7c, 0x9}, 0xc001b99ce0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000c56380, 0x7fbda13c4fd0?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000c56380, 0x3c?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.3.1() test/e2e/network/loadbalancer.go:1285 +0x10a k8s.io/kubernetes/test/e2e/network.glob..func20.3() test/e2e/network/loadbalancer.go:1312 +0x37f
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 21:17:06.851 Nov 26 21:17:06.851: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 21:17:06.852 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:17:38.698 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:17:38.78 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=LoadBalancer test/e2e/network/loadbalancer.go:1266 STEP: creating a service esipp-3928/external-local-lb with type=LoadBalancer 11/26/22 21:17:39.012 STEP: setting ExternalTrafficPolicy=Local 11/26/22 21:17:39.012 STEP: waiting for loadbalancer for service esipp-3928/external-local-lb 11/26/22 21:17:39.169 Nov 26 21:17:39.169: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-lb 11/26/22 21:18:25.278 Nov 26 21:18:25.407: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 21:18:25.465: INFO: Found all 1 pods Nov 26 21:18:25.465: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-lb-lqlbn] Nov 26 21:18:25.465: INFO: Waiting up to 2m0s for pod "external-local-lb-lqlbn" in namespace "esipp-3928" to be "running and ready" Nov 26 21:18:25.529: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 64.603277ms Nov 26 21:18:25.529: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:27.592: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.1273154s Nov 26 21:18:27.592: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:29.594: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129070433s Nov 26 21:18:29.594: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:31.592: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127602731s Nov 26 21:18:31.592: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:33.579: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11461815s Nov 26 21:18:33.579: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:35.582: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116867665s Nov 26 21:18:35.582: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:37.589: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.123674901s Nov 26 21:18:37.589: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:39.614: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.148674345s Nov 26 21:18:39.614: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:41.605: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.14060435s Nov 26 21:18:41.605: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:43.586: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.121223227s Nov 26 21:18:43.586: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:45.605: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.140607338s Nov 26 21:18:45.605: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:47.587: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.122302525s Nov 26 21:18:47.587: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:49.593: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.128322929s Nov 26 21:18:49.593: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:51.692: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 26.227597375s Nov 26 21:18:51.692: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:53.592: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 28.127294127s Nov 26 21:18:53.592: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:55.583: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.11821222s Nov 26 21:18:55.583: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on '' to be 'Running' but was 'Pending' Nov 26 21:18:57.633: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 32.168407407s Nov 26 21:18:57.633: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:18:59.598: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 34.133480697s Nov 26 21:18:59.598: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:19:01.604: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 36.139006548s Nov 26 21:19:01.604: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:19:03.593: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 38.127701985s Nov 26 21:19:03.593: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:19:05.725: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. Elapsed: 40.260263749s Nov 26 21:19:05.725: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:19:07.582: INFO: Pod "external-local-lb-lqlbn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.11749209s Nov 26 21:19:07.582: INFO: Error evaluating pod condition running and ready: want pod 'external-local-lb-lqlbn' on 'bootstrap-e2e-minion-group-b1s2' to be 'Running' but was 'Pending' Nov 26 21:19:09.740: INFO: Pod "external-local-lb-lqlbn": Phase="Running", Reason="", readiness=true. Elapsed: 44.275169067s Nov 26 21:19:09.740: INFO: Pod "external-local-lb-lqlbn" satisfied condition "running and ready" Nov 26 21:19:09.740: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-lb-lqlbn] STEP: waiting for loadbalancer for service esipp-3928/external-local-lb 11/26/22 21:19:09.74 Nov 26 21:19:09.740: INFO: Waiting up to 15m0s for service "external-local-lb" to have a LoadBalancer STEP: reading clientIP using the TCP service's service port via its external VIP 11/26/22 21:19:09.874 Nov 26 21:19:09.874: INFO: Poking "http://34.127.81.71:80/clientip" Nov 26 21:19:19.875: INFO: Poke("http://34.127.81.71:80/clientip"): Get "http://34.127.81.71:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 21:19:21.875: INFO: Poking "http://34.127.81.71:80/clientip" Nov 26 21:19:31.876: INFO: Poke("http://34.127.81.71:80/clientip"): Get "http://34.127.81.71:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 21:19:33.876: INFO: Poking "http://34.127.81.71:80/clientip" Nov 26 21:19:43.876: INFO: Poke("http://34.127.81.71:80/clientip"): Get "http://34.127.81.71:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 21:19:43.876: INFO: Poking "http://34.127.81.71:80/clientip" Nov 26 21:19:53.877: INFO: Poke("http://34.127.81.71:80/clientip"): Get "http://34.127.81.71:80/clientip": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 21:19:55.876: INFO: Poking "http://34.127.81.71:80/clientip" Nov 26 21:20:03.248: INFO: Poke("http://34.127.81.71:80/clientip"): success Nov 26 
21:20:03.248: INFO: ClientIP detected by target pod using VIP:SvcPort is 35.224.48.17:60922 STEP: checking if Source IP is preserved 11/26/22 21:20:03.248 Nov 26 21:20:03.344: INFO: Waiting up to 15m0s for service "external-local-lb" to have no LoadBalancer STEP: Performing setup for networking test in namespace esipp-3928 11/26/22 21:20:14.626 STEP: creating a selector 11/26/22 21:20:14.626 STEP: Creating the service pods in kubernetes 11/26/22 21:20:14.627 Nov 26 21:20:14.627: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 21:20:14.852: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-3928" to be "running and ready" Nov 26 21:20:14.894: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 41.388684ms Nov 26 21:20:14.894: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 21:20:16.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.084297935s Nov 26 21:20:16.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:20:18.939: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.086777868s Nov 26 21:20:18.939: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:20:20.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.084689661s Nov 26 21:20:20.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:20:22.942: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.089330285s Nov 26 21:20:22.942: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:20:24.951: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.098538802s Nov 26 21:20:24.951: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:20:26.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.084546178s
Nov 26 21:20:26.937: INFO: The phase of Pod netserver-0 is Running (Ready = true)
Nov 26 21:20:26.937: INFO: Pod "netserver-0" satisfied condition "running and ready"
Nov 26 21:20:26.981: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-3928" to be "running and ready"
Nov 26 21:20:27.023: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 42.530246ms
Nov 26 21:20:27.023: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:29.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.085417406s
Nov 26 21:20:29.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:31.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.084873817s
Nov 26 21:20:31.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:33.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.084674324s
Nov 26 21:20:33.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:35.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.084108373s
Nov 26 21:20:35.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:37.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.085362274s
Nov 26 21:20:37.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:39.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.085733456s
Nov 26 21:20:39.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:41.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.085179575s
Nov 26 21:20:41.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:43.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.085507939s
Nov 26 21:20:43.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:45.068: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 18.087000791s
Nov 26 21:20:45.068: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:47.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 20.085457672s
Nov 26 21:20:47.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:49.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 22.085045172s
Nov 26 21:20:49.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:51.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 24.085060488s
Nov 26 21:20:51.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:53.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 26.08515341s
Nov 26 21:20:53.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:55.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 28.08483461s
Nov 26 21:20:55.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:57.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 30.085117318s
Nov 26 21:20:57.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:20:59.068: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 32.087455216s
Nov 26 21:20:59.068: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:01.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 34.084347047s
Nov 26 21:21:01.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:03.070: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 36.088956405s
Nov 26 21:21:03.070: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:05.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 38.084408843s
Nov 26 21:21:05.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:07.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 40.086357663s
Nov 26 21:21:07.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:09.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 42.085576977s
Nov 26 21:21:09.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:11.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 44.085093365s
Nov 26 21:21:11.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:13.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 46.085022566s
Nov 26 21:21:13.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:15.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 48.084526002s
Nov 26 21:21:15.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:17.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 50.085107866s
Nov 26 21:21:17.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:19.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 52.086234169s
Nov 26 21:21:19.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:21.068: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 54.087882083s
Nov 26 21:21:21.069: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:23.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 56.086700405s
Nov 26 21:21:23.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:25.080: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 58.099222843s
Nov 26 21:21:25.080: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:27.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.095743886s
Nov 26 21:21:27.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:29.094: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.113598418s
Nov 26 21:21:29.094: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:31.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.085094627s
Nov 26 21:21:31.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:33.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.084565503s
Nov 26 21:21:33.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:35.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.084557524s
Nov 26 21:21:35.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:37.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.092448246s
Nov 26 21:21:37.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:39.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.085931612s
Nov 26 21:21:39.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:41.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.085478006s
Nov 26 21:21:41.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:43.071: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.090610318s
Nov 26 21:21:43.071: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:45.124: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.143709835s
Nov 26 21:21:45.124: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:47.075: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.094670342s
Nov 26 21:21:47.075: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:49.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.092390528s
Nov 26 21:21:49.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:51.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.095635227s
Nov 26 21:21:51.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:53.069: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.088544649s
Nov 26 21:21:53.069: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:55.112: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.131046153s
Nov 26 21:21:55.112: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:57.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.095289344s
Nov 26 21:21:57.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:21:59.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.095012781s
Nov 26 21:21:59.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:01.070: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.088992475s
Nov 26 21:22:01.070: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:03.078: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 1m36.097090762s
Nov 26 21:22:03.078: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:05.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.091916276s
Nov 26 21:22:05.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:07.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.085714657s
Nov 26 21:22:07.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:09.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.085002839s
Nov 26 21:22:09.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:11.071: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.090572158s
Nov 26 21:22:11.071: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:13.083: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.102711921s
Nov 26 21:22:13.083: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:15.068: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.0871919s
Nov 26 21:22:15.068: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:17.105: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.124363618s
Nov 26 21:22:17.105: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:19.074: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.093609826s
Nov 26 21:22:19.074: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:21.086: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.105725931s
Nov 26 21:22:21.086: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:23.099: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.117988705s
Nov 26 21:22:23.099: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:25.093: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.11215252s
Nov 26 21:22:25.093: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:27.095: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.114837514s
Nov 26 21:22:27.095: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:29.103: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.122875945s
Nov 26 21:22:29.104: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:31.075: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.094199952s
Nov 26 21:22:31.075: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:33.074: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.093057695s
Nov 26 21:22:33.074: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:35.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.085789719s
Nov 26 21:22:35.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:37.071: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 2m10.090820153s
Nov 26 21:22:37.071: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13 Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m32.162s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 5m0.001s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] Creating the service pods in kubernetes (Step Runtime: 2m24.386s)
        test/e2e/framework/network/utils.go:761
  Spec Goroutine
  goroutine 2775 [select]
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc000adc9a8, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x70?, 0x2fd9d05?, 0x70?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000ec95c0?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
    k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc00213e4e0}, {0xc000ec11b0, 0xa}, {0xc0046e9c60, 0xb}, {0x75ee704, 0x11}, 0xc0036a7440?, 0x7895ad0)
      test/e2e/framework/pod/wait.go:290
    k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc00213e4e0?}, {0xc0046e9c60?, 0x0?}, {0xc000ec11b0?, 0x0?}, 0xc004e4c8a0?)
      test/e2e/framework/pod/wait.go:564
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000c56380, {0x75c6f7c, 0x9}, 0xc001b99ce0)
      test/e2e/framework/network/utils.go:866
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000c56380, 0x7fbda13c4fd0?)
      test/e2e/framework/network/utils.go:763
    > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000c56380, 0x3c?)
      test/e2e/framework/network/utils.go:778
    > k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x0?})
      test/e2e/framework/network/utils.go:131
    > k8s.io/kubernetes/test/e2e/network.glob..func20.3.1()
      test/e2e/network/loadbalancer.go:1285
    > k8s.io/kubernetes/test/e2e/network.glob..func20.3()
      test/e2e/network/loadbalancer.go:1312
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d6e480})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:22:39.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.092722136s
Nov 26 21:22:39.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:41.070: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.089318102s
Nov 26 21:22:41.070: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:43.075: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.094256761s
Nov 26 21:22:43.075: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:45.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.09556215s
Nov 26 21:22:45.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:47.089: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 2m20.108143552s
Nov 26 21:22:47.089: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:49.084: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.103138649s
Nov 26 21:22:49.084: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:51.085: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.104613027s
Nov 26 21:22:51.085: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:53.100: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.119463929s
Nov 26 21:22:53.100: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:55.100: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.119043204s
Nov 26 21:22:55.100: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:22:57.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.086812178s
Nov 26 21:22:57.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13 Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 5m52.164s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 5m20.003s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] Creating the service pods in kubernetes (Step Runtime: 2m44.388s)
        test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:22:59.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.086054827s
Nov 26 21:22:59.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:01.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.08513101s
Nov 26 21:23:01.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:03.069: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.088167548s
Nov 26 21:23:03.069: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:05.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.084568048s
Nov 26 21:23:05.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:07.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.085775932s
Nov 26 21:23:07.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:09.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 2m42.085952116s
Nov 26 21:23:09.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:11.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.085409113s
Nov 26 21:23:11.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:13.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.085577739s
Nov 26 21:23:13.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:15.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.084698105s
Nov 26 21:23:15.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:17.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.0850842s
Nov 26 21:23:17.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13 Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 6m12.166s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 5m40.005s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m4.39s)
        test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:23:19.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 2m52.084611627s
Nov 26 21:23:19.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:21.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.085334489s
Nov 26 21:23:21.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:23.078: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.097766011s
Nov 26 21:23:23.078: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:25.078: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.097424553s
Nov 26 21:23:25.078: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:27.068: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.087648684s
Nov 26 21:23:27.068: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:29.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.084287804s
Nov 26 21:23:29.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:31.075: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.094642169s
Nov 26 21:23:31.075: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:33.070: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m6.089291162s
Nov 26 21:23:33.070: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:35.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.084965758s
Nov 26 21:23:35.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:37.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 3m10.085819305s
Nov 26 21:23:37.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13 Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 6m32.168s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 6m0.007s)
      test/e2e/network/loadbalancer.go:1266
      At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m24.392s)
        test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:23:39.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.08568095s
Nov 26 21:23:39.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:41.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.084721839s
Nov 26 21:23:41.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:43.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.084834511s
Nov 26 21:23:43.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:45.067: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.085953323s
Nov 26 21:23:45.067: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:47.083: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 3m20.102208822s
Nov 26 21:23:47.083: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:49.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.092538628s
Nov 26 21:23:49.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:51.066: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.085823772s
Nov 26 21:23:51.066: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:53.094: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.11380217s
Nov 26 21:23:53.094: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:55.081: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.100066733s
Nov 26 21:23:55.081: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:23:57.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.084538045s
Nov 26 21:23:57.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 6m52.17s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 6m20.01s)
      test/e2e/network/loadbalancer.go:1266
    At [By Step] Creating the service pods in kubernetes (Step Runtime: 3m44.395s)
      test/e2e/framework/network/utils.go:761

  Spec Goroutine
  goroutine 2775 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc000adc9a8, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x70?, 0x2fd9d05?, 0x70?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000ec95c0?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc00213e4e0}, {0xc000ec11b0, 0xa}, {0xc0046e9c60, 0xb}, {0x75ee704, 0x11}, 0xc0036a7440?, 0x7895ad0)
    test/e2e/framework/pod/wait.go:290
k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc00213e4e0?}, {0xc0046e9c60?, 0x0?}, {0xc000ec11b0?, 0x0?}, 0xc004e4c8a0?)
    test/e2e/framework/pod/wait.go:564
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000c56380, {0x75c6f7c, 0x9}, 0xc001b99ce0)
    test/e2e/framework/network/utils.go:866
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000c56380, 0x7fbda13c4fd0?)
    test/e2e/framework/network/utils.go:763
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000c56380, 0x3c?)
    test/e2e/framework/network/utils.go:778
> k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x0?})
    test/e2e/framework/network/utils.go:131
> k8s.io/kubernetes/test/e2e/network.glob..func20.3.1()
    test/e2e/network/loadbalancer.go:1285
> k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1312
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d6e480})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:23:59.071: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.089950222s
Nov 26 21:23:59.071: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:01.065: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.084337129s
Nov 26 21:24:01.065: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:03.076: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.095654737s
Nov 26 21:24:03.076: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:05.110: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.128982672s
Nov 26 21:24:05.110: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:07.100: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.119074727s
Nov 26 21:24:07.100: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:09.098: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 3m42.117587411s
Nov 26 21:24:09.098: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:11.095: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.114838783s
Nov 26 21:24:11.095: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:13.140: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.159510534s
Nov 26 21:24:13.140: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:15.112: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m48.130909718s
Nov 26 21:24:15.112: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:17.209: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.228035182s
Nov 26 21:24:17.209: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m12.173s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 6m40.012s)
      test/e2e/network/loadbalancer.go:1266
    At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m4.397s)
      test/e2e/framework/network/utils.go:761

  Spec Goroutine
  goroutine 2775 [select]
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a7b680, 0xc0008af500)
    vendor/golang.org/x/net/http2/transport.go:1200
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003a03180, 0xc0008af500, {0xe0?})
    vendor/golang.org/x/net/http2/transport.go:519
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
    vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003f3a000?}, 0xc0008af500?)
    vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc003f3a000, 0xc0008af500)
    /usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0009ffbf0?)
    /usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc003f3c930, 0xc0000f5f00)
    vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0013251c0, 0xc001a1df00)
    vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc001a1df00, {0x7fad100, 0xc0013251c0}, {0x74d54e0?, 0x1?, 0x0?})
    /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc003f3c960, 0xc001a1df00, {0x7fbdccc6da68?, 0x100?, 0x0?})
    /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc003f3c960, 0xc001a1df00)
    /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
    /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001a1dd00, {0x7fe0bc8, 0xc000136008}, 0x0?)
    vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001a1dd00, {0x7fe0bc8, 0xc000136008})
    vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc0010aca00, {0x7fe0bc8, 0xc000136008}, {0xc0046e9c60, 0xb}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
    vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition.func1()
    test/e2e/framework/pod/wait.go:291
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc000136000?}, 0xae75720?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc000adc9a8, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x70?, 0x2fd9d05?, 0x70?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc000ec95c0?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc00213e4e0}, {0xc000ec11b0, 0xa}, {0xc0046e9c60, 0xb}, {0x75ee704, 0x11}, 0xc0036a7440?, 0x7895ad0)
    test/e2e/framework/pod/wait.go:290
k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc00213e4e0?}, {0xc0046e9c60?, 0x0?}, {0xc000ec11b0?, 0x0?}, 0xc004e4c8a0?)
    test/e2e/framework/pod/wait.go:564
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000c56380, {0x75c6f7c, 0x9}, 0xc001b99ce0)
    test/e2e/framework/network/utils.go:866
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000c56380, 0x7fbda13c4fd0?)
    test/e2e/framework/network/utils.go:763
> k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000c56380, 0x3c?)
    test/e2e/framework/network/utils.go:778
> k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x0?})
    test/e2e/framework/network/utils.go:131
> k8s.io/kubernetes/test/e2e/network.glob..func20.3.1()
    test/e2e/network/loadbalancer.go:1285
> k8s.io/kubernetes/test/e2e/network.glob..func20.3()
    test/e2e/network/loadbalancer.go:1312
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000d6e480})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 21:24:19.104: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.123109704s
Nov 26 21:24:19.104: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:21.107: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.125944766s
Nov 26 21:24:21.107: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:23.137: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.15657496s
Nov 26 21:24:23.137: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:25.197: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.216730511s
Nov 26 21:24:25.197: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:27.105: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.124451494s
Nov 26 21:24:27.105: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:29.080: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 4m2.09946552s
Nov 26 21:24:29.080: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:31.096: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.115638989s
Nov 26 21:24:31.096: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:33.127: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.146776837s
Nov 26 21:24:33.127: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:35.079: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.098747932s
Nov 26 21:24:35.079: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:37.111: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.130438811s
Nov 26 21:24:37.111: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m32.177s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 7m0.016s)
      test/e2e/network/loadbalancer.go:1266
    At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m24.401s)
      test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:24:39.081: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m12.100413531s
Nov 26 21:24:39.081: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:41.165: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.184489613s
Nov 26 21:24:41.165: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:43.090: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.109283499s
Nov 26 21:24:43.090: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:45.204: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.223859266s
Nov 26 21:24:45.204: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:47.095: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.114288962s
Nov 26 21:24:47.095: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:49.121: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 4m22.140159922s
Nov 26 21:24:49.121: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:51.108: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.126973883s
Nov 26 21:24:51.108: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:53.109: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.128523573s
Nov 26 21:24:53.109: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:55.094: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.113109302s
Nov 26 21:24:55.094: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:24:57.165: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.184421871s
Nov 26 21:24:57.165: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 7m52.18s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 7m20.019s)
      test/e2e/network/loadbalancer.go:1266
    At [By Step] Creating the service pods in kubernetes (Step Runtime: 4m44.404s)
      test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:24:59.083: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.102173421s
Nov 26 21:24:59.083: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:01.092: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.111303141s
Nov 26 21:25:01.092: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:03.111: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.130324405s
Nov 26 21:25:03.111: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:05.168: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.1877962s
Nov 26 21:25:05.168: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:07.089: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.108645725s
Nov 26 21:25:07.089: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:09.087: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 4m42.106644664s
Nov 26 21:25:09.087: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:11.084: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.103638396s
Nov 26 21:25:11.084: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:13.139: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.158550901s
Nov 26 21:25:13.139: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:15.303: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.322429464s
Nov 26 21:25:15.303: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:17.072: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.09180558s
Nov 26 21:25:17.072: INFO: The phase of Pod netserver-1 is Running (Ready = false)
------------------------------
Progress Report for Ginkgo Process #13
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer (Spec Runtime: 8m12.183s)
    test/e2e/network/loadbalancer.go:1266
    In [It] (Node Runtime: 7m40.023s)
      test/e2e/network/loadbalancer.go:1266
    At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m4.408s)
      test/e2e/framework/network/utils.go:761
------------------------------
Nov 26 21:25:19.107: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.126137361s
Nov 26 21:25:19.107: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:21.073: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.092045861s
Nov 26 21:25:21.073: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:23.081: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.1008696s
Nov 26 21:25:23.082: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:25.088: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.107361029s
Nov 26 21:25:25.088: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:27.101: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.120813573s
Nov 26 21:25:27.101: INFO: The phase of Pod netserver-1 is Running (Ready = false)
Nov 26 21:25:27.168: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false.
Elapsed: 5m0.187339353s Nov 26 21:25:27.168: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:25:27.170: INFO: Unexpected error: <*pod.timeoutError | 0xc005083950>: { msg: "timed out while waiting for pod esipp-3928/netserver-1 to be running and ready", observedObjects: [ <*v1.Pod | 0xc002b7fb00>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "netserver-1", GenerateName: "", Namespace: "esipp-3928", SelfLink: "", UID: "507a9bc8-e641-49ee-bf07-ae3bc3878b8d", ResourceVersion: "13742", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63805094414, loc: { name: "Local", zone: [ {name: "UTC", offset: 0, isDST: false}, ], tx: [ { when: -576460752303423488, index: 0, isstd: false, isutc: false, }, ], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}, }, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: { "selector-76a8fa98-3ff0-4fda-89a3-ac04ce9ee3ab": "true", }, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63805094414, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:selector-76a8fa98-3ff0-4fda-89a3-ac04ce9ee3ab\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"webserver\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:livenessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:initialDelaySeconds\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":8081,\\\"protocol\\\":\\\"UDP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}},\"k:{\\\"containerPort\\\":8083,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:initialDelaySeconds\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:nodeSelector\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}", }, Subresource: "", }, { Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63805094609, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:phase\":{},\"f:podIP\":{},\"f:podIPs\":{\".\":{},\"k:{\\\"ip\\\":\\\"10.64.2.213\\\"}\":{\".\":{},\"f:ip\":{}}},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: [ { Name: "kube-api-access-7pnvx", VolumeSource: { HostPath: nil, EmptyDir: nil, GCEPersistentDisk: nil, AWSElasticBlockStore: nil, GitRepo: nil, Secret: nil, NFS: nil, ISCSI: nil, Glusterfs: nil, PersistentVolumeClaim: nil, RBD: nil, FlexVolume: nil, Cinder: nil, CephFS: nil, Flocker: nil, DownwardAPI: nil, FC: nil, AzureFile: nil, ConfigMap: nil, VsphereVolume: nil, Quobyte: nil, AzureDisk: nil, PhotonPersistentDisk: nil, Projected: { Sources: [ { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Nov 26 21:25:27.170: FAIL: timed out while waiting for pod esipp-3928/netserver-1 to be running and ready Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000c56380, {0x75c6f7c, 0x9}, 0xc001b99ce0) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000c56380, 0x7fbda13c4fd0?) 
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000c56380, 0x3c?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc001110000, {0x0, 0x0, 0x0?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.3.1()
	test/e2e/network/loadbalancer.go:1285 +0x10a
k8s.io/kubernetes/test/e2e/network.glob..func20.3()
	test/e2e/network/loadbalancer.go:1312 +0x37f
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 26 21:25:27.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 26 21:25:27.261: INFO: Output of kubectl describe svc:
Nov 26 21:25:27.261: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=esipp-3928 describe svc --namespace=esipp-3928'
Nov 26 21:25:27.731: INFO: stderr: ""
Nov 26 21:25:27.731: INFO: stdout: "Name: external-local-lb\nNamespace: esipp-3928\nLabels: testid=external-local-lb-9553d4d0-39e7-4382-82fa-ddaec0b4088c\nAnnotations: <none>\nSelector: testid=external-local-lb-9553d4d0-39e7-4382-82fa-ddaec0b4088c\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.175.5\nIPs: 10.0.175.5\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.1.171:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 7m40s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 7m3s service-controller Ensured load balancer\n Normal Type 5m24s service-controller LoadBalancer -> ClusterIP\n Normal DeletingLoadBalancer 5m16s service-controller Deleting load balancer\n Normal DeletedLoadBalancer 4m33s (x2 over 4m33s) service-controller Deleted load balancer\n"
Nov 26 21:25:27.732: INFO: Name:              external-local-lb
Namespace:                esipp-3928
Labels:                   testid=external-local-lb-9553d4d0-39e7-4382-82fa-ddaec0b4088c
Annotations:              <none>
Selector:                 testid=external-local-lb-9553d4d0-39e7-4382-82fa-ddaec0b4088c
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.175.5
IPs:                      10.0.175.5
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.64.1.171:80
Session Affinity:         None
Events:
  Type    Reason                Age                    From                Message
  ----    ------                ----                   ----                -------
  Normal  EnsuringLoadBalancer  7m40s                  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   7m3s                   service-controller  Ensured load balancer
  Normal  Type                  5m24s                  service-controller  LoadBalancer -> ClusterIP
  Normal  DeletingLoadBalancer  5m16s                  service-controller  Deleting load balancer
  Normal  DeletedLoadBalancer   4m33s (x2 over 4m33s)  service-controller  Deleted load balancer
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 21:25:27.732
STEP: Collecting events from namespace "esipp-3928". 11/26/22 21:25:27.732
STEP: Found 31 events.
11/26/22 21:25:27.829
Nov 26 21:25:27.829: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-lb-lqlbn: { } Scheduled: Successfully assigned esipp-3928/external-local-lb-lqlbn to bootstrap-e2e-minion-group-b1s2
Nov 26 21:25:27.829: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-3928/netserver-0 to bootstrap-e2e-minion-group-01xg
Nov 26 21:25:27.829: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-3928/netserver-1 to bootstrap-e2e-minion-group-6k9m
Nov 26 21:25:27.829: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-3928/netserver-2 to bootstrap-e2e-minion-group-b1s2
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:17:47 +0000 UTC - event for external-local-lb: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:18:24 +0000 UTC - event for external-local-lb: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:18:25 +0000 UTC - event for external-local-lb: {replication-controller } SuccessfulCreate: Created pod: external-local-lb-lqlbn
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:19:02 +0000 UTC - event for external-local-lb-lqlbn: {kubelet bootstrap-e2e-minion-group-b1s2} Started: Started container netexec
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:19:02 +0000 UTC - event for external-local-lb-lqlbn: {kubelet bootstrap-e2e-minion-group-b1s2} Created: Created container netexec
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:19:02 +0000 UTC - event for external-local-lb-lqlbn: {kubelet bootstrap-e2e-minion-group-b1s2} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:03 +0000 UTC - event for external-local-lb: {service-controller } Type: LoadBalancer -> ClusterIP
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:11 +0000 UTC - event for external-local-lb: {service-controller } DeletingLoadBalancer: Deleting load balancer
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Created: Created container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Started: Started container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Started: Started container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Created: Created container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Started: Started container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:15 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Created: Created container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:16 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} Killing: Stopping container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:17 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:19 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6k9m} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-3928(507a9bc8-e641-49ee-bf07-ae3bc3878b8d)
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:25 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} Killing: Stopping container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:25 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:46 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-b1s2} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-3928(5cd992ae-e17a-40a4-87ba-ea2ca68970d9)
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:20:54 +0000 UTC - event for external-local-lb: {service-controller } DeletedLoadBalancer: Deleted load balancer
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:22:35 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} Killing: Stopping container webserver
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:22:36 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 21:25:27.829: INFO: At 2022-11-26 21:22:56 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-01xg} BackOff: Back-off restarting failed container webserver in pod netserver-0_esipp-3928(3327246b-cf6b-4549-93c8-2f241ab27e58)
Nov 26 21:25:28.054: INFO: POD                      NODE                             PHASE    GRACE  CONDITIONS
Nov 26 21:25:28.054: INFO: external-local-lb-lqlbn  bootstrap-e2e-minion-group-b1s2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:18:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:19:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:19:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:18:57 +0000 UTC  }]
Nov 26 21:25:28.054: INFO: netserver-0              bootstrap-e2e-minion-group-01xg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:24:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:24:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC  }]
Nov 26 21:25:28.054: INFO: netserver-1              bootstrap-e2e-minion-group-6k9m  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC  }]
Nov 26 21:25:28.054: INFO: netserver-2              bootstrap-e2e-minion-group-b1s2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:25:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 21:25:15 +0000 UTC  } {PodScheduled True 0001-01-01
00:00:00 +0000 UTC 2022-11-26 21:20:14 +0000 UTC }] Nov 26 21:25:28.054: INFO: Nov 26 21:25:29.046: INFO: Logging node info for node bootstrap-e2e-master Nov 26 21:25:29.100: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 04b1a5e9-52d6-4a70-89ff-f2505e084f23 13534 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:23:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:23:01 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:23:01 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:23:01 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:23:01 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.233.174.213,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a44d3cc5e5e4f2535b5861e9b365c743,SystemUUID:a44d3cc5-e5e4-f253-5b58-61e9b365c743,BootID:1d002924-7490-440d-a502-3c6b592d9227,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 26 21:25:29.101: INFO: Logging kubelet events for node bootstrap-e2e-master
Nov 26 21:25:29.219: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Nov 26 21:25:29.367: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 26 21:25:29.367: INFO: metadata-proxy-v0.1-cbwjf started at 2022-11-26 20:40:24 +0000 UTC (0+2 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container metadata-proxy ready: true, restart count 0
Nov 26 21:25:29.367: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Nov 26 21:25:29.367: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container konnectivity-server-container ready: true, restart count 7
Nov 26 21:25:29.367: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container kube-addon-manager ready: true, restart count 2
Nov 26 21:25:29.367: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 20:39:56 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container l7-lb-controller ready: false, restart count 10
Nov 26 21:25:29.367: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container kube-controller-manager ready: true, restart count 9
Nov 26 21:25:29.367: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded)
Nov 26 21:25:29.367: INFO: 	Container kube-scheduler ready: true, restart count 10
Nov 26 21:25:29.367: INFO:
etcd-server-events-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:29.367: INFO: Container etcd-container ready: true, restart count 1 Nov 26 21:25:29.367: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 20:39:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:29.367: INFO: Container etcd-container ready: true, restart count 8 Nov 26 21:25:29.715: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 21:25:29.715: INFO: Logging node info for node bootstrap-e2e-minion-group-01xg Nov 26 21:25:29.810: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-01xg e434adee-5fac-4a09-a6a5-0ccaa4657a2a 15661 0 2022-11-26 20:40:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-01xg kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-01xg topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-6265":"bootstrap-e2e-minion-group-01xg","csi-hostpath-provisioning-3335":"bootstrap-e2e-minion-group-01xg","csi-hostpath-provisioning-9574":"bootstrap-e2e-minion-group-01xg","csi-mock-csi-mock-volumes-9096":"bootstrap-e2e-minion-group-01xg"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 21:22:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}
} status} {kube-controller-manager Update v1 2022-11-26 21:24:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 21:25:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-01xg,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:22:14 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:00 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:00 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:00 +0000 UTC,LastTransitionTime:2022-11-26 20:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:25:00 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.181.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-01xg.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5894cd4cef6fe7904a572f6ea0d30d17,SystemUUID:5894cd4c-ef6f-e790-4a57-2f6ea0d30d17,BootID:974ca52a-d66e-4aa0-b46f-a8ab725b8a91,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 
registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9574^c1e94d3d-6dd0-11ed-b85f-a62c5ac6738c],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9574^c1e94d3d-6dd0-11ed-b85f-a62c5ac6738c,DevicePath:,},},Config:nil,},} Nov 26 21:25:29.811: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-01xg Nov 26 
21:25:29.879: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-01xg Nov 26 21:25:30.029: INFO: netserver-0 started at 2022-11-26 21:22:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container webserver ready: false, restart count 3 Nov 26 21:25:30.029: INFO: kube-proxy-bootstrap-e2e-minion-group-01xg started at 2022-11-26 20:40:21 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container kube-proxy ready: true, restart count 12 Nov 26 21:25:30.029: INFO: coredns-6d97d5ddb-b4rcb started at 2022-11-26 20:40:44 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container coredns ready: true, restart count 14 Nov 26 21:25:30.029: INFO: affinity-lb-esipp-5qmkx started at 2022-11-26 21:24:10 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container affinity-lb-esipp ready: true, restart count 1 Nov 26 21:25:30.029: INFO: netserver-0 started at 2022-11-26 21:20:14 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container webserver ready: true, restart count 4 Nov 26 21:25:30.029: INFO: hostpath-injector started at 2022-11-26 21:24:51 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container hostpath-injector ready: false, restart count 0 Nov 26 21:25:30.029: INFO: csi-mockplugin-0 started at 2022-11-26 21:21:41 +0000 UTC (0+3 container statuses recorded) Nov 26 21:25:30.029: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container driver-registrar ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container mock ready: true, restart count 1 Nov 26 21:25:30.029: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 21:21:42 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 21:25:30.029: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:18:57 +0000 UTC (0+7 container statuses recorded) 
Nov 26 21:25:30.029: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container hostpath ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container liveness-probe ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 21:25:30.029: INFO: pod-subpath-test-dynamicpv-q2l6 started at 2022-11-26 20:42:23 +0000 UTC (1+2 container statuses recorded) Nov 26 21:25:30.029: INFO: Init container init-volume-dynamicpv-q2l6 ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container test-container-subpath-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 21:25:30.029: INFO: Container test-container-volume-dynamicpv-q2l6 ready: false, restart count 1 Nov 26 21:25:30.029: INFO: netserver-0 started at 2022-11-26 21:24:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container webserver ready: true, restart count 0 Nov 26 21:25:30.029: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:24:47 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:30.029: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container hostpath ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container liveness-probe ready: true, restart count 1 Nov 26 21:25:30.029: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 21:25:30.029: INFO: metadata-proxy-v0.1-h8gjd started at 2022-11-26 20:40:22 +0000 UTC (0+2 container 
statuses recorded) Nov 26 21:25:30.029: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:25:30.029: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:25:30.029: INFO: konnectivity-agent-bgjhj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.029: INFO: Container konnectivity-agent ready: true, restart count 13 Nov 26 21:25:30.029: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:21:40 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:30.029: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container hostpath ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 21:25:30.029: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 21:25:30.444: INFO: Latency metrics for node bootstrap-e2e-minion-group-01xg Nov 26 21:25:30.444: INFO: Logging node info for node bootstrap-e2e-minion-group-6k9m Nov 26 21:25:30.534: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6k9m 767c941b-a788-4a8d-ab83-b3689d62fa87 15523 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6k9m kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6k9m topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3484":"bootstrap-e2e-minion-group-6k9m","csi-hostpath-provisioning-9955":"bootstrap-e2e-minion-group-6k9m","csi-mock-csi-mock-volumes-8842":"bootstrap-e2e-minion-group-6k9m"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 21:22:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 21:24:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 21:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-6k9m,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 
UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:24:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:24:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:24:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:24:54 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.90.109,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6k9m.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:435b7e3e9a4818e01965eecdf511b45e,SystemUUID:435b7e3e-9a48-18e0-1965-eecdf511b45e,BootID:325a5a2f-31cc-40bf-bd7b-a1e8ee820ba8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 21:25:30.535: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6k9m Nov 26 21:25:30.598: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6k9m Nov 26 21:25:30.732: INFO: metadata-proxy-v0.1-ltr6z started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 21:25:30.732: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:25:30.732: INFO: failure-4 started at 2022-11-26 21:25:26 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container failure-4 ready: false, restart count 0 Nov 26 21:25:30.732: INFO: pod-configmaps-7b2cd58a-0ff8-4192-a0e9-61f817b007a7 started at 2022-11-26 21:22:06 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 21:25:30.732: INFO: back-off-cap started at 2022-11-26 21:23:57 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container back-off-cap ready: false, restart count 3 Nov 26 21:25:30.732: INFO: konnectivity-agent-dvrb2 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container konnectivity-agent ready: false, restart count 11 Nov 26 21:25:30.732: INFO: csi-mockplugin-0 started at 2022-11-26 21:18:57 +0000 UTC (0+3 container statuses recorded) Nov 26 21:25:30.732: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 
21:25:30.732: INFO: Container driver-registrar ready: true, restart count 1 Nov 26 21:25:30.732: INFO: Container mock ready: true, restart count 1 Nov 26 21:25:30.732: INFO: external-local-nodeport-46xcc started at 2022-11-26 21:22:37 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container netexec ready: false, restart count 4 Nov 26 21:25:30.732: INFO: var-expansion-c8bfb129-d4bb-4a32-9aab-69c806d7a4a8 started at 2022-11-26 21:24:01 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container dapi-container ready: false, restart count 0 Nov 26 21:25:30.732: INFO: kube-proxy-bootstrap-e2e-minion-group-6k9m started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container kube-proxy ready: true, restart count 11 Nov 26 21:25:30.732: INFO: kube-dns-autoscaler-5f6455f985-mcwh8 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container autoscaler ready: false, restart count 10 Nov 26 21:25:30.732: INFO: netserver-1 started at 2022-11-26 21:22:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container webserver ready: false, restart count 3 Nov 26 21:25:30.732: INFO: l7-default-backend-8549d69d99-c89m7 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 21:25:30.732: INFO: volume-snapshot-controller-0 started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container volume-snapshot-controller ready: true, restart count 10 Nov 26 21:25:30.732: INFO: mutability-test-qp926 started at 2022-11-26 21:25:06 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container netexec ready: true, restart count 1 Nov 26 21:25:30.732: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:24:22 +0000 UTC (0+7 container statuses recorded) Nov 26 
21:25:30.732: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container hostpath ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 21:25:30.732: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 21:25:30.732: INFO: httpd started at 2022-11-26 21:25:14 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container httpd ready: true, restart count 0 Nov 26 21:25:30.732: INFO: coredns-6d97d5ddb-l2p8d started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container coredns ready: false, restart count 12 Nov 26 21:25:30.732: INFO: netserver-1 started at 2022-11-26 21:20:14 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container webserver ready: false, restart count 5 Nov 26 21:25:30.732: INFO: pod-secrets-fb53adfa-3029-48fe-a04d-da319caa794f started at 2022-11-26 21:24:05 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 21:25:30.732: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:24:09 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:30.732: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container hostpath ready: true, restart count 0 Nov 26 21:25:30.732: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 21:25:30.732: INFO: 
Container node-driver-registrar ready: true, restart count 0 Nov 26 21:25:30.732: INFO: affinity-lb-esipp-2crdv started at 2022-11-26 21:24:10 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container affinity-lb-esipp ready: true, restart count 2 Nov 26 21:25:30.732: INFO: netserver-1 started at 2022-11-26 21:24:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:30.732: INFO: Container webserver ready: false, restart count 2 Nov 26 21:25:31.203: INFO: Latency metrics for node bootstrap-e2e-minion-group-6k9m Nov 26 21:25:31.203: INFO: Logging node info for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:25:31.302: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-b1s2 158a2876-cfb9-4447-8a85-01261d1067a0 15601 0 2022-11-26 20:40:22 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-b1s2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-b1s2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-3350":"bootstrap-e2e-minion-group-b1s2","csi-hostpath-multivolume-8789":"bootstrap-e2e-minion-group-b1s2","csi-hostpath-provisioning-5357":"bootstrap-e2e-minion-group-b1s2","csi-hostpath-volumeio-3970":"bootstrap-e2e-minion-group-b1s2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 20:40:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 20:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 21:22:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}
} status} {kube-controller-manager Update v1 2022-11-26 21:25:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 21:25:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-boskos-gce-project-04/us-west1-b/bootstrap-e2e-minion-group-b1s2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 21:22:12 +0000 UTC,LastTransitionTime:2022-11-26 20:40:25 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 20:40:38 +0000 UTC,LastTransitionTime:2022-11-26 20:40:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 21:25:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 21:25:13 +0000 UTC,LastTransitionTime:2022-11-26 20:40:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.78.235,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-b1s2.c.k8s-boskos-gce-project-04.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:000b31609596f7869267159501efda8f,SystemUUID:000b3160-9596-f786-9267-159501efda8f,BootID:5d963531-fbaa-4a0a-9b1c-e32bb0923886,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 
registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-5357^c98b519f-6dd0-11ed-9ca8-4e5447ed62fc 
kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-99^e77b887d-6dcb-11ed-aef1-e2bb2c77ff06,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-5357^c98b519f-6dd0-11ed-9ca8-4e5447ed62fc,DevicePath:,},},Config:nil,},} Nov 26 21:25:31.303: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-b1s2 Nov 26 21:25:31.383: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-b1s2 Nov 26 21:25:31.663: INFO: konnectivity-agent-q4nqj started at 2022-11-26 20:40:38 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container konnectivity-agent ready: true, restart count 12 Nov 26 21:25:31.663: INFO: netserver-2 started at 2022-11-26 21:20:14 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container webserver ready: true, restart count 4 Nov 26 21:25:31.663: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:23:30 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:31.663: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container hostpath ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 21:25:31.663: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 21:25:31.663: INFO: metadata-proxy-v0.1-6l49k started at 2022-11-26 20:40:23 +0000 UTC (0+2 container statuses recorded) Nov 26 21:25:31.663: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 21:25:31.663: INFO: 
affinity-lb-esipp-wsxlx started at 2022-11-26 21:24:10 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container affinity-lb-esipp ready: true, restart count 0 Nov 26 21:25:31.663: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:18:57 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:31.663: INFO: Container csi-attacher ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container csi-provisioner ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container csi-resizer ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container csi-snapshotter ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container hostpath ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container liveness-probe ready: true, restart count 5 Nov 26 21:25:31.663: INFO: Container node-driver-registrar ready: true, restart count 5 Nov 26 21:25:31.663: INFO: external-local-lb-lqlbn started at 2022-11-26 21:18:57 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container netexec ready: true, restart count 0 Nov 26 21:25:31.663: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:24:59 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:31.663: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container hostpath ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 21:25:31.663: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 21:25:31.663: INFO: netserver-2 started at 2022-11-26 21:24:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container webserver ready: true, restart count 1 Nov 26 21:25:31.663: INFO: 
metrics-server-v0.5.2-867b8754b9-xh56x started at 2022-11-26 20:41:00 +0000 UTC (0+2 container statuses recorded) Nov 26 21:25:31.663: INFO: Container metrics-server ready: true, restart count 12 Nov 26 21:25:31.663: INFO: Container metrics-server-nanny ready: false, restart count 12 Nov 26 21:25:31.663: INFO: test-container-pod started at 2022-11-26 21:23:21 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container webserver ready: true, restart count 2 Nov 26 21:25:31.663: INFO: pod-subpath-test-dynamicpv-cdch started at 2022-11-26 21:25:04 +0000 UTC (1+2 container statuses recorded) Nov 26 21:25:31.663: INFO: Init container init-volume-dynamicpv-cdch ready: true, restart count 1 Nov 26 21:25:31.663: INFO: Container test-container-subpath-dynamicpv-cdch ready: true, restart count 1 Nov 26 21:25:31.663: INFO: Container test-container-volume-dynamicpv-cdch ready: true, restart count 1 Nov 26 21:25:31.663: INFO: csi-hostpathplugin-0 started at 2022-11-26 21:18:57 +0000 UTC (0+7 container statuses recorded) Nov 26 21:25:31.663: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container csi-resizer ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container hostpath ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container liveness-probe ready: true, restart count 3 Nov 26 21:25:31.663: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 26 21:25:31.663: INFO: netserver-2 started at 2022-11-26 21:22:41 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container webserver ready: true, restart count 0 Nov 26 21:25:31.663: INFO: kube-proxy-bootstrap-e2e-minion-group-b1s2 started at 2022-11-26 20:40:22 +0000 UTC (0+1 container statuses recorded) Nov 26 21:25:31.663: INFO: Container kube-proxy ready: 
false, restart count 13 Nov 26 21:25:32.124: INFO: Latency metrics for node bootstrap-e2e-minion-group-b1s2 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-3928" for this suite. 11/26/22 21:25:32.124
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/pod/exec_util.go:126 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc001fac1c8, 0x12}, {0xc0034e6cf0, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00142e9a0, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc0012d6490, 0xb}, {0xc004ca94c0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc There were additional failures detected after the initial failure: [FAILED] Nov 26 21:27:48.978: failed to list events in namespace "esipp-6606": Get "https://35.233.174.213/api/v1/namespaces/esipp-6606/events": dial tcp 35.233.174.213:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 21:27:49.018: Couldn't delete ns: "esipp-6606": Delete "https://35.233.174.213/api/v1/namespaces/esipp-6606": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/esipp-6606", Err:(*net.OpError)(0xc0031b2000)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 21:22:37.048 Nov 26 21:22:37.048: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 21:22:37.051 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 21:22:37.196 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 21:22:37.288 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-6606/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/26/22 21:22:37.47 STEP: creating a pod to be part of the service external-local-nodeport 11/26/22 21:22:37.559 Nov 26 21:22:37.621: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 21:22:37.671: INFO: Found all 1 pods Nov 26 21:22:37.671: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-46xcc] Nov 26 21:22:37.671: INFO: Waiting up to 2m0s for pod "external-local-nodeport-46xcc" in namespace "esipp-6606" to be "running and ready" Nov 26 21:22:37.715: INFO: Pod "external-local-nodeport-46xcc": Phase="Pending", Reason="", readiness=false. Elapsed: 43.246344ms Nov 26 21:22:37.715: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-46xcc' on 'bootstrap-e2e-minion-group-6k9m' to be 'Running' but was 'Pending' Nov 26 21:22:39.818: INFO: Pod "external-local-nodeport-46xcc": Phase="Running", Reason="", readiness=true. Elapsed: 2.14618652s Nov 26 21:22:39.818: INFO: Pod "external-local-nodeport-46xcc" satisfied condition "running and ready" Nov 26 21:22:39.818: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-nodeport-46xcc] STEP: Performing setup for networking test in namespace esipp-6606 11/26/22 21:22:40.916 STEP: creating a selector 11/26/22 21:22:40.916 STEP: Creating the service pods in kubernetes 11/26/22 21:22:40.916 Nov 26 21:22:40.916: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 21:22:41.141: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-6606" to be "running and ready" Nov 26 21:22:41.183: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.177154ms Nov 26 21:22:41.183: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 21:22:43.230: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.089141202s Nov 26 21:22:43.230: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:45.233: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.092279148s Nov 26 21:22:45.233: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:47.246: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.104866067s Nov 26 21:22:47.246: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:49.261: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.119901004s Nov 26 21:22:49.261: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:51.232: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091359333s Nov 26 21:22:51.232: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:53.259: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.118413236s Nov 26 21:22:53.259: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:55.235: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.094578569s Nov 26 21:22:55.235: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:57.238: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.097561627s Nov 26 21:22:57.238: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:22:59.225: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.084334063s Nov 26 21:22:59.225: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:23:01.225: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.084412029s Nov 26 21:23:01.225: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 21:23:03.307: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.166865245s Nov 26 21:23:03.308: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 21:23:03.308: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 21:23:03.349: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-6606" to be "running and ready" Nov 26 21:23:03.391: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 41.75762ms Nov 26 21:23:03.391: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:05.435: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.085983449s Nov 26 21:23:05.435: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:07.444: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.094532563s Nov 26 21:23:07.444: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:09.446: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.09683232s Nov 26 21:23:09.446: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:11.433: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.083987072s Nov 26 21:23:11.433: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:13.434: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 10.085001498s Nov 26 21:23:13.434: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:15.434: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 12.084479825s Nov 26 21:23:15.434: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:17.433: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 14.083785779s Nov 26 21:23:17.433: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:19.439: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 16.089953457s Nov 26 21:23:19.439: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 26 21:23:21.440: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 18.091135505s Nov 26 21:23:21.440: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 26 21:23:21.440: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 26 21:23:21.483: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-6606" to be "running and ready" Nov 26 21:23:21.526: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 42.62116ms Nov 26 21:23:21.526: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 26 21:23:21.526: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/26/22 21:23:21.568 Nov 26 21:23:21.698: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-6606" to be "running" Nov 26 21:23:21.752: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 54.063921ms Nov 26 21:23:23.794: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.096762829s Nov 26 21:23:23.794: INFO: Pod "test-container-pod" satisfied condition "running" Nov 26 21:23:23.836: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/26/22 21:23:23.836 Nov 26 21:23:23.836: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/26/22 21:23:23.921 Nov 26 21:23:24.091: INFO: Service node-port-service in namespace esipp-6606 found. Nov 26 21:23:24.363: INFO: Service session-affinity-service in namespace esipp-6606 found. STEP: Waiting for NodePort service to expose endpoint 11/26/22 21:23:24.407 Nov 26 21:23:25.407: INFO: Waiting for amount of service:node-port-service endpoints to be 3 STEP: Waiting for Session Affinity service to expose endpoint 11/26/22 21:23:25.448 Nov 26 21:23:26.449: INFO: Waiting for amount of service:session-affinity-service endpoints to be 3 STEP: reading clientIP using the TCP service's NodePort, on node bootstrap-e2e-minion-group-6k9m: 10.138.0.3:31305/clientip 11/26/22 21:23:26.49 Nov 26 21:23:26.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.1.206:9080/dial?request=clientip&protocol=http&host=10.138.0.3&port=31305&tries=1'] Namespace:esipp-6606 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 21:23:26.532: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 21:23:26.534: INFO: ExecWithOptions: Clientset creation Nov 26 21:23:26.534: INFO: ExecWithOptions: execute(POST https://35.233.174.213/api/v1/namespaces/esipp-6606/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.64.1.206%3A9080%2Fdial%3Frequest%3Dclientip%26protocol%3Dhttp%26host%3D10.138.0.3%26port%3D31305%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 21:23:58.742: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.64.1.206:9080/dial?request=clientip&protocol=http&host=10.138.0.3&port=31305&tries=1'] Namespace:esipp-6606 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
[identical ExecWithOptions/curl probe entries, with the same command and exec URL as above, repeated between 21:23:58 and 21:27:36 — mostly at ~2s intervals, with occasional longer gaps; duplicate log entries elided]
------------------------------
Progress Report for Ginkgo Process #25
Automatically polling progress:
  [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort (Spec Runtime: 5m0.423s)
    test/e2e/network/loadbalancer.go:1314
    In [It] (Node Runtime: 5m0.001s)
      test/e2e/network/loadbalancer.go:1314
      At [By Step] reading clientIP using the TCP service's NodePort, on node bootstrap-e2e-minion-group-6k9m: 10.138.0.3:31305/clientip (Step Runtime: 4m10.98s)
        test/e2e/network/loadbalancer.go:1335

Spec Goroutine
goroutine 3115 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
> k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8})
test/e2e/network/util.go:69
> k8s.io/kubernetes/test/e2e/network.glob..func20.4()
test/e2e/network/loadbalancer.go:1336
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000edc600})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
[the same ExecWithOptions/curl probe was attempted again at 21:27:38, 21:27:40, 21:27:42, 21:27:44, and 21:27:46; duplicate log entries elided]
Nov 26 21:27:48.741: INFO: Unexpected error: failed to get pod test-container-pod:
<*url.Error | 0xc0052c04e0>: {
    Op: "Get",
    URL: "https://35.233.174.213/api/v1/namespaces/esipp-6606/pods/test-container-pod",
    Err: <*net.OpError | 0xc0014960a0>{
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: <*net.TCPAddr | 0xc0034e7050>{
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213],
            Port: 443,
            Zone: "",
        },
        Err: <*os.SyscallError | 0xc0032bdf80>{
            Syscall: "connect",
            Err: <syscall.Errno>0x6f,
        },
    },
}
Nov 26 21:27:48.741: FAIL: failed to get pod test-container-pod: Get "https://35.233.174.213/api/v1/namespaces/esipp-6606/pods/test-container-pod": dial tcp 35.233.174.213:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc001fac1c8, 0x12}, {0xc0034e6cf0, 0x3, 0x3})
test/e2e/framework/pod/exec_util.go:126 +0x133
k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...)
test/e2e/framework/pod/exec_util.go:138
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00142e9a0, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc0012d6490, 0xb}, {0xc004ca94c0, 0xa}, 0x2378, ...)
test/e2e/framework/network/utils.go:396 +0x32a
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...)
test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc E1126 21:27:48.741771 8301 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. 
a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/pod/exec_util.go", LineNumber:126, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc001fac1c8, 0x12}, {0xc0034e6cf0, 0x3, 0x3})\n\ttest/e2e/framework/pod/exec_util.go:126 +0x133\nk8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...)\n\ttest/e2e/framework/pod/exec_util.go:138\nk8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00142e9a0, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc0012d6490, 0xb}, {0xc004ca94c0, 0xa}, 0x2378, ...)\n\ttest/e2e/framework/network/utils.go:396 +0x32a\nk8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...)\n\ttest/e2e/framework/network/utils.go:411\nk8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1()\n\ttest/e2e/network/util.go:62 +0x91\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 
+0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8})\n\ttest/e2e/network/util.go:69 +0x125\nk8s.io/kubernetes/test/e2e/network.glob..func20.4()\n\ttest/e2e/network/loadbalancer.go:1336 +0x2dc", CustomMessage:""}} (Your Test Panicked test/e2e/framework/pod/exec_util.go:126 When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...). 
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure ) goroutine 3115 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70eb7e0?, 0xc000f498f0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f498f0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70eb7e0, 0xc000f498f0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc001367520, 0xc6}, {0xc0005e37e8?, 0x75b521a?, 0xc0005e3808?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001532900, 0xb1}, {0xc0005e3880?, 0xc00008eea0?, 0xc0005e38a8?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc0052c04e0}, {0xc0032bdfc0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc001fac1c8, 0x12}, {0xc0034e6cf0, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00142e9a0, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc0012d6490, 0xb}, {0xc004ca94c0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) 
test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000edc600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d Nov 26 21:27:48.781: INFO: Unexpected error: <*url.Error | 0xc0031df3b0>: { Op: "Delete", URL: "https://35.233.174.213/api/v1/namespaces/esipp-6606/services/external-local-nodeport", Err: <*net.OpError | 0xc00103dc20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0052c0990>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0003571e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 21:27:48.781: FAIL: Delete "https://35.233.174.213/api/v1/namespaces/esipp-6606/services/external-local-nodeport": dial tcp 35.233.174.213:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc000f498f0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f498f0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7 panic({0x70eb7e0, 0xc000f498f0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001532900, 0xb1}, {0xc0005e3880?, 0xc00008eea0?, 0xc0005e38a8?}) 
test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc0052c04e0}, {0xc0032bdfc0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.execCommandInPodWithFullOutput(0x7775853?, {0xc001fac1c8, 0x12}, {0xc0034e6cf0, 0x3, 0x3}) test/e2e/framework/pod/exec_util.go:126 +0x133 k8s.io/kubernetes/test/e2e/framework/pod.ExecShellInPodWithFullOutput(...) test/e2e/framework/pod/exec_util.go:138 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc00142e9a0, {0x75b767e, 0x4}, {0x75c2d47, 0x8}, {0xc0012d6490, 0xb}, {0xc004ca94c0, 0xa}, 0x2378, ...) test/e2e/framework/network/utils.go:396 +0x32a k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromTestContainer(...) test/e2e/framework/network/utils.go:411 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer.func1() test/e2e/network/util.go:62 +0x91 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc000fcb038, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x58?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc001671aa0?, 0xc001c63da8?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x0?, 0xc0d8bed39d412f53?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/network.GetHTTPContentFromTestContainer(0xc00142e9a0, {0xc004ca94c0, 0xa}, 0x7a49, 0x4?, {0x75c2d47, 0x8}) test/e2e/network/util.go:69 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1336 +0x2dc [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 21:27:48.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 21:27:48.821: INFO: Output of kubectl describe svc: Nov 26 21:27:48.821: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.174.213 --kubeconfig=/workspace/.kube/config --namespace=esipp-6606 describe svc --namespace=esipp-6606' Nov 26 21:27:48.939: INFO: rc: 1 Nov 26 21:27:48.939: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 21:27:48.939 STEP: Collecting events from namespace "esipp-6606". 
11/26/22 21:27:48.939 Nov 26 21:27:48.978: INFO: Unexpected error: failed to list events in namespace "esipp-6606": <*url.Error | 0xc0052c09c0>: { Op: "Get", URL: "https://35.233.174.213/api/v1/namespaces/esipp-6606/events", Err: <*net.OpError | 0xc001496410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003708720>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 233, 174, 213], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0003a2380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 21:27:48.978: FAIL: failed to list events in namespace "esipp-6606": Get "https://35.233.174.213/api/v1/namespaces/esipp-6606/events": dial tcp 35.233.174.213:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0005de5c0, {0xc0037bf270, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002c4c680}, {0xc0037bf270, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0005de650?, {0xc0037bf270?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc001290000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0053111b0?, 0xc005368f50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc005368f40?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0053111b0?, 0x2622c40?}, {0xae73300?, 0xc005368f80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6606" for this suite. 
11/26/22 21:27:48.979 Nov 26 21:27:49.018: FAIL: Couldn't delete ns: "esipp-6606": Delete "https://35.233.174.213/api/v1/namespaces/esipp-6606": dial tcp 35.233.174.213:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.233.174.213/api/v1/namespaces/esipp-6606", Err:(*net.OpError)(0xc0031b2000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001290000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc005311130?, 0xc001124fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc005311130?, 0x0?}, {0xae73300?, 0x5?, 0xc0031e4990?}) /usr/local/go/src/reflect/value.go:368 +0xbc
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
test/e2e/network/loadbalancer.go:1429 k8s.io/kubernetes/test/e2e/network.glob..func20.6() test/e2e/network/loadbalancer.go:1429 +0xdd
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 20:51:57.751 Nov 26 20:51:57.752: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 20:51:57.753 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 20:53:36.786 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 20:53:36.906 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work from pods test/e2e/network/loadbalancer.go:1422 STEP: creating a service esipp-2814/external-local-pods with type=LoadBalancer 11/26/22 20:53:37.378 STEP: setting ExternalTrafficPolicy=Local 11/26/22 20:53:37.378 STEP: waiting for loadbalancer for service esipp-2814/external-local-pods 11/26/22 20:53:37.465 Nov 26 20:53:37.465: INFO: Waiting up to 15m0s for service "external-local-pods" to have a LoadBalancer Nov 26 20:55:09.592: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://35.233.174.213/api/v1/namespaces/esipp-2814/services/external-local-pods": stream error: stream ID 631; INTERNAL_ERROR; received from peer Nov 26 20:56:11.580: INFO: Retrying .... error trying to get Service external-local-pods: Get "https://35.233.174.213/api/v1/namespaces/esipp-2814/services/external-local-pods": stream error: stream ID 633; INTERNAL_ERROR; received from peer Nov 26 20:57:13.577: INFO: Retrying .... 
error trying to get Service external-local-pods: Get "https://35.233.174.213/api/v1/namespaces/esipp-2814/services/external-local-pods": stream error: stream ID 635; INTERNAL_ERROR; received from peer ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should work from pods (Spec Runtime: 6m39.628s) test/e2e/network/loadbalancer.go:1422 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1422 At [By Step] waiting for loadbalancer for service esipp-2814/external-local-pods (Step Runtime: 4m59.914s) test/e2e/framework/service/jig.go:260 Spec Goroutine goroutine 1464 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc002a1f1a0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xf0?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate