go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
There were additional failures detected after the initial failure:
[FAILED] Nov 25 16:01:11.231: failed to list events in namespace "chunking-9650": Get "https://35.197.125.133/api/v1/namespaces/chunking-9650/events": dial tcp 35.197.125.133:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 16:01:11.272: Couldn't delete ns: "chunking-9650": Delete "https://35.197.125.133/api/v1/namespaces/chunking-9650": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/chunking-9650", Err:(*net.OpError)(0xc0011e71d0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:57:41.85 Nov 25 15:57:41.850: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename chunking 11/25/22 15:57:41.852 Nov 25 15:57:41.891: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:43.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:45.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:47.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:49.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:51.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:53.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:55.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:57.932: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:59.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:01.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:03.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:05.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:07.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:09.931: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:59:33.14 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:59:33.244 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/apimachinery/chunking.go:51 STEP: creating a large number of resources 11/25/22 15:59:33.452 [It] should support continue listing from the last key if the original version has been compacted away, though the list is 
inconsistent [Slow] test/e2e/apimachinery/chunking.go:126 STEP: retrieving the first page 11/25/22 15:59:51.018 Nov 25 15:59:51.072: INFO: Retrieved 40/40 results with rv 2845 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mjg0NSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 STEP: retrieving the second page until the token expires 11/25/22 15:59:51.072 Nov 25 16:00:11.116: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mjg0NSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 25 16:00:31.177: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mjg0NSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 25 16:00:51.118: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mjg0NSwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet STEP: retrieving the second page again with the token received with the error message 11/25/22 16:01:11.112 Nov 25 16:01:11.152: INFO: Unexpected error: failed to list pod templates in namespace: chunking-9650, given inconsistent continue token and limit: 40: <*url.Error | 0xc0031b2090>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/chunking-9650/podtemplates?limit=40", Err: <*net.OpError | 0xc0011e6fa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002fff680>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011513e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:11.152: FAIL: failed to list pod templates in namespace: chunking-9650, given inconsistent continue token and limit: 40: Get "https://35.197.125.133/api/v1/namespaces/chunking-9650/podtemplates?limit=40": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:177 +0x7fc [AfterEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/node/init/init.go:32 Nov 25 16:01:11.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:01:11.192 STEP: Collecting events from namespace "chunking-9650". 
11/25/22 16:01:11.192 Nov 25 16:01:11.231: INFO: Unexpected error: failed to list events in namespace "chunking-9650": <*url.Error | 0xc002fff6b0>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/chunking-9650/events", Err: <*net.OpError | 0xc003294a00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003302810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000212ce0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:11.231: FAIL: failed to list events in namespace "chunking-9650": Get "https://35.197.125.133/api/v1/namespaces/chunking-9650/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0034265c0, {0xc002425750, 0xd}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0005e7380}, {0xc002425750, 0xd}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003426650?, {0xc002425750?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0009e6780) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001058c90?, 0xc003231fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000365408?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001058c90?, 0x29449fc?}, {0xae73300?, 0xc003231f80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking tear down framework | framework.go:193 STEP: Destroying namespace "chunking-9650" for this suite. 11/25/22 16:01:11.232 Nov 25 16:01:11.272: FAIL: Couldn't delete ns: "chunking-9650": Delete "https://35.197.125.133/api/v1/namespaces/chunking-9650": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/chunking-9650", Err:(*net.OpError)(0xc0011e71d0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0009e6780) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001058bd0?, 0xc003232fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001058bd0?, 0x0?}, {0xae73300?, 0x5?, 0xc00325f350?}) /usr/local/go/src/reflect/value.go:368 +0xbc
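The failing call above is the chunked LIST that resumes from an expired continue token. For orientation, a minimal client-go sketch of that pagination flow, under assumptions: the namespace, page size, and error handling here are illustrative and are not the test's actual code (see test/e2e/apimachinery/chunking.go for that).

// Hedged sketch: list PodTemplates in pages of 40 and resume after the
// continue token has been compacted away. Namespace and page size are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	opts := metav1.ListOptions{Limit: 40}
	for {
		page, err := client.CoreV1().PodTemplates("default").List(ctx, opts)
		if apierrors.IsResourceExpired(err) {
			// The token outlived the etcd compaction window. The 410 response
			// carries a fresh token that continues from the last key, at the
			// cost of an inconsistent (non-snapshot) list.
			if status, ok := err.(apierrors.APIStatus); ok {
				opts.Continue = status.Status().ListMeta.Continue
				continue
			}
		}
		if err != nil {
			panic(err) // e.g. "connect: connection refused" as in the log above
		}
		fmt.Printf("got %d items at rv %s\n", len(page.Items), page.ResourceVersion)
		if page.Continue == "" {
			break
		}
		opts.Continue = page.Continue
	}
}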
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\sjobs\swhen\ssuspended\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001091860)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:12:01.553 Nov 25 16:12:01.553: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/25/22 16:12:01.554 Nov 25 16:12:01.593: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:03.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:05.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:07.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:09.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:11.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:13.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:15.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:17.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:19.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:21.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:23.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:25.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:27.633: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:29.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.634: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.674: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.674: INFO: Unexpected error: <*errors.errorString | 0xc000239a00>: { s: "timed out waiting for the condition", } Nov 25 16:12:31.674: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001091860) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 25 16:12:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:12:31.714 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193
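This entry never gets past framework setup: BeforeEach keeps retrying the namespace POST every couple of seconds until it gives up with "timed out waiting for the condition". The same pattern repeats in the ForbidConcurrent and StatefulSet burst-scaling entries below. A rough sketch of that retry shape, under assumptions: the interval, timeout, and helper name are illustrative, not the framework's actual values or code.

// Hedged sketch of the setup retry visible above; fragment, not runnable on its own.
package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createTestNamespace(ctx context.Context, c kubernetes.Interface, baseName string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := c.CoreV1().Namespaces().Create(ctx, &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// e.g. "dial tcp 35.197.125.133:443: connect: connection refused"
			// while the apiserver is unreachable; keep retrying until the
			// timeout, which surfaces as "timed out waiting for the condition".
			return false, nil
		}
		created = ns
		return true, nil
	})
	return created, err
}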
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010e3860)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:11:01.177 Nov 25 16:11:01.177: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/25/22 16:11:01.179 Nov 25 16:11:01.219: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:03.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:05.260: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:07.260: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:09.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:11.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:13.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:15.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:17.260: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:19.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:21.258: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:23.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:25.260: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:27.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:29.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:31.259: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:31.298: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:31.298: INFO: Unexpected error: <*errors.errorString | 0xc00017da10>: { s: "timed out waiting for the condition", } Nov 25 16:11:31.299: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010e3860) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 25 16:11:31.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:11:31.338 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011681e0)
	test/e2e/framework/framework.go:241 +0x96f

from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:15:31.101 Nov 25 16:15:31.101: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 16:15:31.103 Nov 25 16:15:31.143: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:33.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:35.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:37.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:39.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:41.182: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:43.182: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:45.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:47.182: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:49.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:51.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:53.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:55.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:57.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:59.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:01.183: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:01.223: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:01.223: INFO: Unexpected error: <*errors.errorString | 0xc0001fd990>: { s: "timed out waiting for the condition", } Nov 25 16:16:01.223: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011681e0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 16:16:01.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:16:01.263 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:254
k8s.io/kubernetes/test/e2e/framework/statefulset.update({0x801de88, 0xc0011dcb60}, {0xc000c7d3d0, 0x10}, {0xc000c7d3b8, 0x2}, 0xc001ac7930)
	test/e2e/framework/statefulset/rest.go:254 +0x1cb
k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x801de88?, 0xc0011dcb60}, 0x0?, 0x0)
	test/e2e/framework/statefulset/rest.go:151 +0x165
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
	test/e2e/apps/statefulset.go:685 +0xa6c
There were additional failures detected after the initial failure:
[FAILED] Nov 25 16:14:28.437: Get "https://35.197.125.133/apis/apps/v1/namespaces/statefulset-8460/statefulsets": dial tcp 35.197.125.133:443: connect: connection refused
In [AfterEach] at: test/e2e/framework/statefulset/rest.go:76
----------
[FAILED] Nov 25 16:14:28.516: failed to list events in namespace "statefulset-8460": Get "https://35.197.125.133/api/v1/namespaces/statefulset-8460/events": dial tcp 35.197.125.133:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 16:14:28.556: Couldn't delete ns: "statefulset-8460": Delete "https://35.197.125.133/api/v1/namespaces/statefulset-8460": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/statefulset-8460", Err:(*net.OpError)(0xc001d03d60)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
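The trace above fails inside statefulset.Scale, whose test step is "Scaling down stateful set ss to 0 replicas"; the Scale/update frames suggest a get-modify-update of .spec.replicas against a dead apiserver. Before the full test log below, a hedged client-go sketch of that general scale-by-update pattern, under assumptions: the function name, retry policy, and structure are illustrative and not the framework's rest.go code.

// Hedged sketch of scaling a StatefulSet by updating .spec.replicas with a
// conflict-retry loop; fragment, names are illustrative.
package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func scaleStatefulSet(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) (*appsv1.StatefulSet, error) {
	var updated *appsv1.StatefulSet
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// An unreachable apiserver shows up here as
			// "dial tcp ...:443: connect: connection refused".
			return err
		}
		ss.Spec.Replicas = &replicas
		updated, err = c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
		return err
	})
	return updated, err
}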
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:04:51.856 Nov 25 16:04:51.856: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/25/22 16:04:51.858 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:04:52.071 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:04:52.219 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-8460 11/25/22 16:04:52.352 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/25/22 16:04:52.435 STEP: Creating stateful set ss in namespace statefulset-8460 11/25/22 16:04:52.529 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8460 11/25/22 16:04:52.619 Nov 25 16:04:52.683: INFO: Found 0 stateful pods, waiting for 1 Nov 25 16:05:02.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 25 16:05:12.777: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 11/25/22 16:05:12.777 Nov 25 16:05:12.857: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 16:05:13.863: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 16:05:13.863: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 16:05:13.863: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 16:05:13.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 25 16:05:24.025: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 16:05:24.025: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 16:05:24.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999088s Nov 25 16:05:25.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.914435344s Nov 25 16:05:26.457: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.836106736s Nov 25 16:05:27.510: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.755573453s Nov 25 16:05:28.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.703453181s Nov 25 16:05:29.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.653175989s Nov 25 16:05:30.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.605059092s Nov 25 16:05:31.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.51392825s Nov 25 16:05:32.817: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.449084507s Nov 25 16:05:33.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 396.180632ms STEP: Scaling up 
stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8460 11/25/22 16:05:34.879 Nov 25 16:05:34.936: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:05:35.876: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 25 16:05:35.876: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 25 16:05:35.876: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 25 16:05:35.932: INFO: Found 1 stateful pods, waiting for 3 Nov 25 16:05:46.020: INFO: Found 2 stateful pods, waiting for 3 Nov 25 16:05:55.983: INFO: Found 2 stateful pods, waiting for 3 Nov 25 16:06:06.019: INFO: Found 2 stateful pods, waiting for 3 Nov 25 16:06:16.053: INFO: Found 2 stateful pods, waiting for 3 Nov 25 16:06:25.987: INFO: Found 2 stateful pods, waiting for 3 Nov 25 16:06:36.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:06:46.038: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:06:56.040: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:06.004: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:15.990: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:26.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:35.988: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:46.032: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:07:56.062: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:06.060: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:16.056: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:26.048: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:36.063: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:46.017: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 25 16:08:56.019: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 25 16:08:56.019: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 25 16:08:56.019: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order 11/25/22 16:08:56.019 STEP: Scale down will halt with unhealthy stateful pod 11/25/22 16:08:56.019 Nov 25 16:08:56.176: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 16:08:56.969: INFO: stderr: "+ 
mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 16:08:56.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 16:08:56.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 16:08:56.969: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 16:08:57.627: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 16:08:57.627: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 16:08:57.627: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 16:08:57.627: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 25 16:08:58.408: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 25 16:08:58.408: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 25 16:08:58.408: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 25 16:08:58.408: INFO: Waiting for statefulset status.replicas updated to 0 Nov 25 16:08:58.493: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 25 16:09:08.613: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 25 16:09:08.613: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 25 16:09:08.613: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 25 16:09:08.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999923s Nov 25 16:09:09.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.942644838s Nov 25 16:09:10.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.885408164s Nov 25 16:09:12.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.76265281s Nov 25 16:09:13.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.720591641s Nov 25 16:09:14.124: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.671951871s Nov 25 16:09:15.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.628776113s Nov 25 16:09:16.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.585190822s Nov 25 16:09:17.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.480533622s Nov 25 16:09:18.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 427.761526ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 11/25/22 16:09:19.383 Nov 25 16:09:19.427: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Nov 25 16:09:19.784: INFO: rc: 1 Nov 25 16:09:19.784: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 16:09:29.784: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:09:30.123: INFO: rc: 1 Nov 25 16:09:30.123: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 16:09:40.124: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:09:40.478: INFO: rc: 1 Nov 25 16:09:40.478: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 16:09:50.479: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:09:50.852: INFO: rc: 1 Nov 25 16:09:50.852: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m0.579s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m0s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 33.051s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 25 16:10:00.853: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:01.186: INFO: rc: 1 Nov 25 16:10:01.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 16:10:11.186: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:11.537: INFO: rc: 1 Nov 25 16:10:11.537: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] 
[Conformance] (Spec Runtime: 5m20.582s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m20.003s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 53.055s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 25 16:10:21.538: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:21.883: INFO: rc: 1 Nov 25 16:10:21.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 Nov 25 16:10:31.884: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:32.242: INFO: rc: 1 Nov 25 16:10:32.242: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m40.584s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m40.005s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 1m13.056s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 25 16:10:42.243: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:42.611: INFO: rc: 1 Nov 25 16:10:42.611: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should 
happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m0.587s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m0.009s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 1m33.06s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 25 16:10:52.612: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:10:52.974: INFO: rc: 1 Nov 25 16:10:52.974: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 16:11:02.975: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:03.087: INFO: rc: 1 Nov 25 16:11:03.087: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 
--kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m20.59s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m20.011s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 1m53.062s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 25 16:11:13.088: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:13.210: INFO: rc: 1 Nov 25 16:11:13.210: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 Nov 25 16:11:23.210: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:23.344: INFO: rc: 1 Nov 25 16:11:23.344: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 E1125 16:11:27.286049 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:28.286555 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:29.287555 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:30.288500 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:31.288907 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:32.289703 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m40.592s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m40.014s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 2m13.065s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ E1125 16:11:33.290798 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:11:33.345: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:33.466: INFO: rc: 1 Nov 25 16:11:33.466: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 E1125 16:11:34.290945 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:35.291261 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:36.292460 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:37.293108 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:38.293259 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:39.293376 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:40.294230 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:41.295160 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:42.296509 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:43.296516 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:11:43.467: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:43.598: INFO: rc: 1 Nov 25 16:11:43.598: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 
exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 E1125 16:11:44.297348 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:45.298414 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:46.299090 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:47.300173 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:48.301318 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:49.301453 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:50.301569 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:51.302310 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:52.302394 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m0.594s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m0.016s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 2m33.067s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ E1125 16:11:53.303286 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:11:53.599: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:11:53.716: INFO: rc: 1 Nov 25 16:11:53.716: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 E1125 16:11:54.303945 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:55.303890 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:56.304360 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:57.305520 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:58.306258 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:11:59.306442 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:00.307290 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:01.307017 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:02.307510 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:03.307996 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:03.717: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:12:03.848: INFO: rc: 1 Nov 25 16:12:03.848: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 
exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 E1125 16:12:04.308382 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:05.309038 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:06.309453 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:07.310477 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:08.311353 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:09.312260 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:10.312486 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:11.312655 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:12.313634 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m20.597s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m20.018s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 2m53.069s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ E1125 16:12:13.314190 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:13.848: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:12:13.981: INFO: rc: 1 Nov 25 16:12:13.982: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 E1125 16:12:14.314759 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:15.315608 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:16.316910 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:17.317919 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:18.317708 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:19.318782 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:20.319012 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:21.320056 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:22.320361 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:23.320524 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:23.983: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:12:24.099: INFO: rc: 1 Nov 25 16:12:24.099: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 
exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 E1125 16:12:24.321166 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:25.322402 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:26.322744 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:27.323657 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:28.323956 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:29.325351 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:30.326232 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:31.326323 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:32.326556 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m40.599s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m40.021s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 3m13.072s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ E1125 16:12:33.327214 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:34.100: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:12:34.225: INFO: rc: 1 Nov 25 16:12:34.225: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 E1125 16:12:34.327456 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:35.328316 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:36.328622 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:37.328853 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:38.328885 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:39.329277 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:40.330152 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:41.330823 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:42.331248 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:43.331637 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:44.227: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' E1125 16:12:44.332111 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 
16:12:44.345: INFO: rc: 1 Nov 25 16:12:44.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 E1125 16:12:45.333191 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:46.333962 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:47.334443 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:48.335617 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:49.335671 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:50.335944 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:51.336880 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" E1125 16:12:52.337594 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m0.602s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m0.024s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 3m33.075s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 774 [select, 4 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc000515a40}, {0x7fbcaa0, 0xc001980c40}, {0xc00199ff38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc000515a40}, {0xc0035352f8?, 0x75b5154?}, {0x7facee0?, 0xc001261f50?}, {0xc00199ff38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ E1125 16:12:53.338399 10079 retrywatcher.go:130] "Watch failed" err="Get \"https://35.197.125.133/api/v1/namespaces/statefulset-8460/pods?allowWatchBookmarks=true&labelSelector=baz%3Dblah%2Cfoo%3Dbar&resourceVersion=7754&watch=true\": dial tcp 35.197.125.133:443: connect: connection refused" Nov 25 16:12:54.346: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:12:57.263: INFO: rc: 1 Nov 25 16:12:57.263: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 25 16:13:07.264: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:07.378: INFO: rc: 1 Nov 25 16:13:07.378: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 
35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m20.605s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m20.026s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 3m53.078s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:13:17.379: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:17.500: INFO: rc: 1 Nov 25 16:13:17.500: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 Nov 25 16:13:27.500: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:27.618: INFO: rc: 1 Nov 25 16:13:27.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m40.608s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m40.029s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 4m13.08s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:13:37.619: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:37.733: INFO: rc: 1 Nov 25 16:13:37.733: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 Nov 25 16:13:47.733: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:47.848: INFO: rc: 1 Nov 25 16:13:47.848: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m0.614s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m0.035s) test/e2e/apps/statefulset.go:587 At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 4m33.086s) test/e2e/apps/statefulset.go:683 Spec Goroutine goroutine 711 [sleep] time.Sleep(0x2540be400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800) test/e2e/framework/pod/output/output.go:113 k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38}) test/e2e/framework/statefulset/rest.go:240 > k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?) test/e2e/apps/statefulset.go:1728 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:684 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:13:57.848: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:13:57.969: INFO: rc: 1 Nov 25 16:13:57.969: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? error: exit status 1 Nov 25 16:14:07.970: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:14:08.080: INFO: rc: 1 Nov 25 16:14:08.080: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: The connection to the server 35.197.125.133 was refused - did you specify the right host or port? 
error: exit status 1
------------------------------
Progress Report for Ginkgo Process #6
Automatically polling progress:
  [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m20.616s)
    test/e2e/apps/statefulset.go:587
    In [It] (Node Runtime: 9m20.037s)
      test/e2e/apps/statefulset.go:587
      At [By Step] Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8460 (Step Runtime: 4m53.089s)
        test/e2e/apps/statefulset.go:683

Spec Goroutine
goroutine 711 [sleep]
time.Sleep(0x2540be400)
	/usr/local/go/src/runtime/time.go:195
k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmdWithRetries({0xc0010a1dd0, 0x10}, {0xc0010a1dbc, 0x4}, {0xc0019e3440, 0x38}, 0xc0012333f0?, 0x45d964b800)
	test/e2e/framework/pod/output/output.go:113
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods({0x801de88?, 0xc0011dcb60?}, 0xc001ac7e20?, {0xc0019e3440, 0x38})
	test/e2e/framework/statefulset/rest.go:240
> k8s.io/kubernetes/test/e2e/apps.restoreHTTPProbe({0x801de88, 0xc0011dcb60}, 0x0?)
	test/e2e/apps/statefulset.go:1728
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
	test/e2e/apps/statefulset.go:684
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0017d8480})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 16:14:18.081: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 25 16:14:18.205: INFO: rc: 1
Nov 25 16:14:18.205: INFO: Waiting 10s to retry failed RunHostCmd: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
The connection to the server 35.197.125.133 was refused - did you specify the right host or port?
error: exit status 1 Nov 25 16:14:28.206: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 25 16:14:28.317: INFO: rc: 1 Nov 25 16:14:28.318: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Nov 25 16:14:28.318: INFO: Scaling statefulset ss to 0 Nov 25 16:14:28.357: FAIL: failed to get statefulset "ss": Get "https://35.197.125.133/apis/apps/v1/namespaces/statefulset-8460/statefulsets/ss": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.update({0x801de88, 0xc0011dcb60}, {0xc000c7d3d0, 0x10}, {0xc000c7d3b8, 0x2}, 0xc001ac7930) test/e2e/framework/statefulset/rest.go:254 +0x1cb k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x801de88?, 0xc0011dcb60}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:151 +0x165 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:685 +0xa6c [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 25 16:14:28.397: INFO: Deleting all statefulset in ns statefulset-8460 Nov 25 16:14:28.437: INFO: Unexpected error: <*url.Error | 0xc002fef590>: { Op: "Get", URL: "https://35.197.125.133/apis/apps/v1/namespaces/statefulset-8460/statefulsets", Err: <*net.OpError | 0xc0037d36d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037f2870>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003483a80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:14:28.437: FAIL: Get "https://35.197.125.133/apis/apps/v1/namespaces/statefulset-8460/statefulsets": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x801de88, 0xc0011dcb60}, {0xc00374cbd0, 0x10}) test/e2e/framework/statefulset/rest.go:76 +0x113 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 25 16:14:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:14:28.477 STEP: Collecting events from namespace "statefulset-8460". 
11/25/22 16:14:28.477 Nov 25 16:14:28.516: INFO: Unexpected error: failed to list events in namespace "statefulset-8460": <*url.Error | 0xc002fefce0>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/statefulset-8460/events", Err: <*net.OpError | 0xc0037d3900>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037135c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0014e8000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:14:28.516: FAIL: failed to list events in namespace "statefulset-8460": Get "https://35.197.125.133/api/v1/namespaces/statefulset-8460/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0013405c0, {0xc00374cbd0, 0x10}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0011dcb60}, {0xc00374cbd0, 0x10}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001340650?, {0xc00374cbd0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011621e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000cd1780?, 0xc001e34fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001923408?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000cd1780?, 0x29449fc?}, {0xae73300?, 0xc001e34f80?, 0x2a6d786?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-8460" for this suite. 11/25/22 16:14:28.517 Nov 25 16:14:28.556: FAIL: Couldn't delete ns: "statefulset-8460": Delete "https://35.197.125.133/api/v1/namespaces/statefulset-8460": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/statefulset-8460", Err:(*net.OpError)(0xc001d03d60)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011621e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000cd16d0?, 0xc00199ffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000cd16d0?, 0x0?}, {0xae73300?, 0x5?, 0xc00118ce10?}) /usr/local/go/src/reflect/value.go:368 +0xbc
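The StatefulSet failure above shows RunHostCmdWithRetries sleeping 10s between `kubectl exec` attempts (the 0x2540be400 in the goroutine dump is 10s, 0x45d964b800 is the 5m overall budget) while the apiserver at 35.197.125.133:443 refuses connections. Below is a minimal, self-contained sketch of that retry shape, not the framework's actual implementation; the helper name is hypothetical, and the namespace, pod, command, and intervals are copied from the log for illustration only (assumes kubectl is on PATH and the kubeconfig is ambient).

```go
// retry_exec_sketch.go — a sketch (NOT the e2e framework code) of the retry
// loop visible in the log: run `kubectl exec` in the test namespace, and if it
// fails (e.g. the apiserver refuses connections), wait 10s and retry until an
// overall timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetries is a hypothetical stand-in for the framework helper
// named in the stack trace; it shells out to kubectl the same way the log does.
func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl",
			"--namespace="+ns, "exec", pod, "--",
			"/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("giving up after %v: %v", timeout, err)
		}
		fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, err)
		time.Sleep(interval) // matches the time.Sleep(10s) in the goroutine dump
	}
}

func main() {
	// Values mirror the failing call in the log: namespace statefulset-8460,
	// pod ss-0, restoring the index.html that the HTTP probe step moved aside.
	out, err := runHostCmdWithRetries("statefulset-8460", "ss-0",
		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
		10*time.Second, 5*time.Minute)
	fmt.Println(out, err)
}
```

When the apiserver never comes back, the loop exhausts its budget and the test proceeds to the scale-down step, which then fails directly on the refused connection, which is exactly the sequence recorded above.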
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011063c0) test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:12:01.472 Nov 25 16:12:01.472: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/25/22 16:12:01.474 Nov 25 16:12:01.514: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:03.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:05.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:07.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:09.553: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:11.555: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:13.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:15.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:17.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:19.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:21.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:23.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:25.553: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:27.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:29.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.554: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.593: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.593: INFO: Unexpected error: <*errors.errorString | 0xc000115d30>: { s: "timed out waiting for the condition", } Nov 25 16:12:31.593: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011063c0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 25 16:12:31.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:12:31.633 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193
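This spec never got past framework.(*Framework).BeforeEach: namespace creation was retried roughly every 2s against the refused apiserver until the framework's wait gave up with the generic "timed out waiting for the condition". A rough poll-until-timeout sketch using only the standard library is shown below; it is not the framework's code, client-go and kubeconfig auth are deliberately omitted, and the 30s timeout and request body are illustrative assumptions.

```go
// ns_poll_sketch.go — an illustrative poll loop (not the framework's code):
// POST the namespace create request every 2s and return a timeout error once
// the deadline passes, mirroring the "connection refused ... timed out waiting
// for the condition" sequence in the log above.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"net/http"
	"time"
)

func createNamespaceWithRetry(apiServer, body string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// The real test authenticates via the kubeconfig and verifies TLS;
		// here any transport or status error simply triggers another retry.
		resp, err := http.Post(apiServer+"/api/v1/namespaces",
			"application/json", bytes.NewBufferString(body))
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusCreated {
				return nil
			}
			err = fmt.Errorf("unexpected status: %s", resp.Status)
		}
		fmt.Println("Unexpected error while creating namespace:", err)
		time.Sleep(interval)
	}
	// The generic wait-style error surfaced in the log.
	return errors.New("timed out waiting for the condition")
}

func main() {
	err := createNamespaceWithRetry("https://35.197.125.133",
		`{"metadata":{"generateName":"svcaccounts-"}}`, 2*time.Second, 30*time.Second)
	fmt.Println(err)
}
```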
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:589 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d
from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:04:26.684 Nov 25 16:04:26.684: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 16:04:26.686 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:04:44.517 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:04:44.667 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/25/22 16:04:44.793 Nov 25 16:04:44.793: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 create -f -' Nov 25 16:04:45.495: INFO: stderr: "" Nov 25 16:04:45.495: INFO: stdout: "pod/httpd created\n" Nov 25 16:04:45.495: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 25 16:04:45.495: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6816" to be "running and ready" Nov 25 16:04:45.572: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 77.712417ms Nov 25 16:04:45.572: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:47.636: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141316888s Nov 25 16:04:47.636: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:49.632: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137508065s Nov 25 16:04:49.632: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:51.673: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178276924s Nov 25 16:04:51.673: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:53.671: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176607332s Nov 25 16:04:53.671: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:55.681: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186838332s Nov 25 16:04:55.682: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:57.635: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.140185327s Nov 25 16:04:57.635: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:04:59.641: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.146668868s Nov 25 16:04:59.641: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:05:01.688: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.1935604s Nov 25 16:05:01.688: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 16:05:03.657: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.162806086s Nov 25 16:05:03.658: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:05.644: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.149765788s Nov 25 16:05:05.644: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:07.626: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 22.131803854s Nov 25 16:05:07.627: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:09.790: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.295336463s Nov 25 16:05:09.790: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:11.729: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.234159911s Nov 25 16:05:11.729: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:13.632: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 28.137774363s Nov 25 16:05:13.633: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:15.631: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 30.136605989s Nov 25 16:05:15.631: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:17.640: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 32.144916455s Nov 25 16:05:17.640: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:19.712: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.217477636s Nov 25 16:05:19.712: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:21.660: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 36.164958439s Nov 25 16:05:21.660: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:23.625: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 38.129872811s Nov 25 16:05:23.625: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:25.689: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 40.194527796s Nov 25 16:05:25.689: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:27.626: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.130957886s Nov 25 16:05:27.626: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:29.639: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 44.143857312s Nov 25 16:05:29.639: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:31.631: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 46.136791245s Nov 25 16:05:31.632: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:33.638: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 48.14348901s Nov 25 16:05:33.638: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:35.655: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.160359951s Nov 25 16:05:35.655: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:37.640: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 52.145828039s Nov 25 16:05:37.641: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:39.630: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 54.135057944s Nov 25 16:05:39.630: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:41.659: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 56.163851806s Nov 25 16:05:41.659: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:43.635: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.139938133s Nov 25 16:05:43.635: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:45.639: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.144557499s Nov 25 16:05:45.639: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6gq3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:17 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:05:47.627: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.132009484s Nov 25 16:05:47.627: INFO: Pod "httpd" satisfied condition "running and ready" Nov 25 16:05:47.627: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command with --leave-stdin-open test/e2e/kubectl/kubectl.go:585 Nov 25 16:05:47.627: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42' Nov 25 16:06:24.149: INFO: rc: 1 Nov 25 16:06:24.150: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0013d3470>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42:\nCommand stdout:\n\nstderr:\nError from server: Get \"https://10.138.0.5:10250/containerLogs/kubectl-6816/failure-4/failure-4\": No agent available\n\nerror:\nexit status 1", }, Code: 1, } Nov 25 16:06:24.150: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42: Command stdout: stderr: Error from server: Get "https://10.138.0.5:10250/containerLogs/kubectl-6816/failure-4/failure-4": No agent available error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/25/22 
16:06:24.15 Nov 25 16:06:24.150: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 delete --grace-period=0 --force -f -' Nov 25 16:06:24.535: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 25 16:06:24.535: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 25 16:06:24.535: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 get rc,svc -l name=httpd --no-headers' Nov 25 16:06:24.966: INFO: stderr: "No resources found in kubectl-6816 namespace.\n" Nov 25 16:06:24.966: INFO: stdout: "" Nov 25 16:06:24.966: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=kubectl-6816 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 25 16:06:25.257: INFO: stderr: "" Nov 25 16:06:25.257: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 16:06:25.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:06:25.341 STEP: Collecting events from namespace "kubectl-6816". 11/25/22 16:06:25.341 STEP: Found 13 events. 11/25/22 16:06:25.441 Nov 25 16:06:25.441: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for failure-4: { } Scheduled: Successfully assigned kubectl-6816/failure-4 to bootstrap-e2e-minion-group-sp52 Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:45 +0000 UTC - event for httpd: {default-scheduler } Scheduled: Successfully assigned kubectl-6816/httpd to bootstrap-e2e-minion-group-6gq3 Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:46 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:51 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Created: Created container httpd Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:51 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 5.084511327s (5.084523628s including waiting) Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:52 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Killing: Stopping container httpd Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:52 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Started: Started container httpd Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:53 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:54 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Nov 25 16:06:25.441: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6gq3} BackOff: Back-off restarting failed container httpd in pod httpd_kubectl-6816(75370cfa-72e2-4f8c-9b62-aa7c5746df69) Nov 25 16:06:25.441: INFO: At 2022-11-25 16:06:20 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-sp52} Started: Started container failure-4 Nov 25 16:06:25.441: INFO: At 2022-11-25 16:06:20 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-sp52} Created: Created container failure-4 Nov 25 16:06:25.441: INFO: At 2022-11-25 16:06:20 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-sp52} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 25 16:06:25.538: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:06:25.538: INFO: failure-4 bootstrap-e2e-minion-group-sp52 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC }] Nov 25 16:06:25.539: INFO: Nov 25 16:06:25.661: INFO: Unable to fetch kubectl-6816/failure-4/failure-4 logs: an error on the server ("unknown") has prevented the request from succeeding (get pods failure-4) Nov 25 16:06:25.735: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:06:25.802: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:06:25.802: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:06:25.884: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:06:26.023: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 16:06:26.023: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:06:26.106: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 5545 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-1436":"bootstrap-e2e-minion-group-6gq3","csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 16:05:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 16:06:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 16:06:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:05:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:06:22 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:06:22 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:06:22 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:06:22 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-1436^eeb7b306-6cda-11ed-bacc-ee4d4a7a69be kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1436^eeb7b306-6cda-11ed-bacc-ee4d4a7a69be,DevicePath:,},},Config:nil,},} Nov 25 16:06:26.106: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:06:26.162: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:06:26.267: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6gq3: error trying to reach service: No agent available Nov 25 16:06:26.267: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:06:26.369: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 5557 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 16:04:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 16:05:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:06:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:05:43 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:06:26.370: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:06:26.443: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:06:26.607: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-9cl6: error trying to reach service: No agent available Nov 25 16:06:26.607: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:06:26.675: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 5566 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-4816":"bootstrap-e2e-minion-group-sp52","csi-mock-csi-mock-volumes-8227":"csi-mock-csi-mock-volumes-8227","csi-mock-csi-mock-volumes-9286":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:05:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:06:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:05:47 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:05:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:06:26.675: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:06:26.731: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:06:26.902: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sp52: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-6816" for this suite. 11/25/22 16:06:26.902
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010f22d0) test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:14:15.027 Nov 25 16:14:15.027: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/25/22 16:14:15.029 Nov 25 16:14:15.069: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:17.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:19.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:21.110: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:23.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:25.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:27.110: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:29.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:31.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:33.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:35.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:37.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:39.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:41.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:43.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:45.109: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:45.149: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:45.149: INFO: Unexpected error: <*errors.errorString | 0xc00017da20>: { s: "timed out waiting for the condition", } Nov 25 16:14:45.149: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0010f22d0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 25 16:14:45.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:14:45.189 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sAddon\supdate\sshould\spropagate\sadd\-on\sfile\schanges\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011e9d10) test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-cloud-provider-gcp] Addon update set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:02:12.176 Nov 25 16:02:12.176: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename addon-update-test 11/25/22 16:02:12.179 Nov 25 16:04:12.226: INFO: Unexpected error: <*fmt.wrapError | 0xc004c4e020>: { msg: "wait for service account \"default\" in namespace \"addon-update-test-8457\": timed out waiting for the condition", err: <*errors.errorString | 0xc00017da10>{ s: "timed out waiting for the condition", }, } Nov 25 16:04:12.226: FAIL: wait for service account "default" in namespace "addon-update-test-8457": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011e9d10) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/framework/node/init/init.go:32 Nov 25 16:04:12.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-cloud-provider-gcp] Addon update test/e2e/cloud/gcp/addon_update.go:237 [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:04:12.27 STEP: Collecting events from namespace "addon-update-test-8457". 11/25/22 16:04:12.27 STEP: Found 0 events. 11/25/22 16:04:25.212 Nov 25 16:04:25.279: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:04:25.279: INFO: Nov 25 16:04:25.339: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:04:25.438: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:04:25.438: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:04:25.545: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:04:25.622: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 16:04:25.622: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.665: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 3768 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:57:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 16:04:25.666: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.717: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.782: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6gq3: error trying to reach service: No agent available Nov 25 16:04:25.782: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:25.828: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 3697 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:03:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} 
{<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2e633bf6-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2d80c2fb-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},},Config:nil,},} Nov 25 16:04:25.828: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:25.877: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:25.945: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-9cl6: error trying to reach service: No agent available Nov 25 16:04:25.945: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:25.990: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 3722 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4245":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:03:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:04:25.990: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:26.043: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:26.108: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sp52: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cloud-provider-gcp] Addon update tear down framework | framework.go:193 STEP: Destroying namespace "addon-update-test-8457" for this suite. 11/25/22 16:04:26.108
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/network/loadbalancer.go:1535 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1535 +0x357
from junit_01.xml
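The escaped --ginkgo.focus regex above targets the spec "[sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field". The FAIL line in the log below, "Expected <int>: 0 not to equal <int>: 0", is the flattened form of a Gomega NotTo(Equal(...)) failure. The stand-alone sketch below only reproduces that message shape; the variable name and the guess that loadbalancer.go:1535 asserts on a port-like value read after ExternalTrafficPolicy is flipped from Local to Cluster are assumptions for illustration, not taken from the test source.

package main

import (
	"fmt"

	"github.com/onsi/gomega"
)

func main() {
	// Stand-alone Gomega: report failures by printing instead of aborting a Ginkgo spec.
	g := gomega.NewGomega(func(message string, _ ...int) { fmt.Println(message) })

	// Hypothetical stand-in for whatever integer the assertion at loadbalancer.go:1535
	// reads after the ExternalTrafficPolicy update; zero reproduces the failure above.
	got := 0

	// Renders as "Expected\n    <int>: 0\nnot to equal\n    <int>: 0",
	// which the flattened log prints as "Expected <int>: 0 not to equal <int>: 0".
	g.Expect(got).NotTo(gomega.Equal(0))
}

In other words, the test expected a non-zero value at that point and got 0; the log that follows records the connection-refused retries against the apiserver and the service events around the Local -> Cluster switch.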
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:57:29.178 Nov 25 15:57:29.178: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:57:29.179 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:57:29.341 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:57:29.424 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-5513/external-local-update with type=LoadBalancer 11/25/22 15:57:29.643 STEP: setting ExternalTrafficPolicy=Local 11/25/22 15:57:29.643 STEP: waiting for loadbalancer for service esipp-5513/external-local-update 11/25/22 15:57:29.703 Nov 25 15:57:29.703: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer Nov 25 15:57:41.797: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:43.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:45.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:47.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:49.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:51.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:53.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:55.797: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:57.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:59.795: INFO: Retrying .... 
error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:01.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:03.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:05.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:07.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:09.796: INFO: Retrying .... error trying to get Service external-local-update: Get "https://35.197.125.133/api/v1/namespaces/esipp-5513/services/external-local-update": dial tcp 35.197.125.133:443: connect: connection refused STEP: creating a pod to be part of the service external-local-update 11/25/22 15:59:35.877 Nov 25 15:59:35.952: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:59:36.027: INFO: Found all 1 pods Nov 25 15:59:36.027: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-2kgxd] Nov 25 15:59:36.027: INFO: Waiting up to 2m0s for pod "external-local-update-2kgxd" in namespace "esipp-5513" to be "running and ready" Nov 25 15:59:36.085: INFO: Pod "external-local-update-2kgxd": Phase="Pending", Reason="", readiness=false. Elapsed: 57.973378ms Nov 25 15:59:36.085: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-2kgxd' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 15:59:38.133: INFO: Pod "external-local-update-2kgxd": Phase="Running", Reason="", readiness=true. Elapsed: 2.105552139s Nov 25 15:59:38.133: INFO: Pod "external-local-update-2kgxd" satisfied condition "running and ready" Nov 25 15:59:38.133: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-update-2kgxd] STEP: waiting for loadbalancer for service esipp-5513/external-local-update 11/25/22 15:59:38.133 Nov 25 15:59:38.133: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/25/22 15:59:38.203 Nov 25 15:59:39.555: FAIL: Expected <int>: 0 not to equal <int>: 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1535 +0x357 Nov 25 15:59:39.695: INFO: Waiting up to 15m0s for service "external-local-update" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 15:59:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 15:59:50.050: INFO: Output of kubectl describe svc: Nov 25 15:59:50.050: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=esipp-5513 describe svc --namespace=esipp-5513' Nov 25 15:59:50.600: INFO: stderr: "" Nov 25 15:59:50.600: INFO: stdout: "Name: external-local-update\nNamespace: esipp-5513\nLabels: testid=external-local-update-08e20d46-9b07-41c0-ade9-b3faff678655\nAnnotations: <none>\nSelector: testid=external-local-update-08e20d46-9b07-41c0-ade9-b3faff678655\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.29.17\nIPs: 10.0.29.17\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: \nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 2m21s service-controller Ensuring load balancer\n Normal UpdatedLoadBalancer 17s service-controller Updated load balancer with new hosts\n Normal EnsuredLoadBalancer 16s service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 12s (x2 over 18s) service-controller Ensuring load balancer\n Normal ExternalTrafficPolicy 12s service-controller Local -> Cluster\n Normal Type 11s service-controller LoadBalancer -> ClusterIP\n" Nov 25 15:59:50.600: INFO: Name: external-local-update Namespace: esipp-5513 Labels: testid=external-local-update-08e20d46-9b07-41c0-ade9-b3faff678655 Annotations: <none> Selector: testid=external-local-update-08e20d46-9b07-41c0-ade9-b3faff678655 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.29.17 IPs: 10.0.29.17 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 2m21s service-controller Ensuring load balancer Normal UpdatedLoadBalancer 17s service-controller Updated load balancer with new hosts Normal EnsuredLoadBalancer 16s service-controller Ensured load balancer Normal EnsuringLoadBalancer 12s (x2 over 18s) service-controller Ensuring load balancer Normal ExternalTrafficPolicy 12s service-controller Local -> Cluster Normal Type 11s service-controller LoadBalancer -> ClusterIP [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:59:50.6 STEP: Collecting events from namespace "esipp-5513". 11/25/22 15:59:50.6 STEP: Found 14 events. 
11/25/22 15:59:50.685 Nov 25 15:59:50.685: INFO: At 2022-11-25 15:57:29 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:32 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:33 +0000 UTC - event for external-local-update: {service-controller } UpdatedLoadBalancer: Updated load balancer with new hosts Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:34 +0000 UTC - event for external-local-update: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:35 +0000 UTC - event for external-local-update: {replication-controller } SuccessfulCreate: Created pod: external-local-update-2kgxd Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:35 +0000 UTC - event for external-local-update-2kgxd: {default-scheduler } Scheduled: Successfully assigned esipp-5513/external-local-update-2kgxd to bootstrap-e2e-minion-group-6gq3 Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:37 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} Started: Started container netexec Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:37 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} Killing: Stopping container netexec Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:37 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} Created: Created container netexec Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:37 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:38 +0000 UTC - event for external-local-update: {service-controller } ExternalTrafficPolicy: Local -> Cluster Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:38 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:39 +0000 UTC - event for external-local-update: {service-controller } Type: LoadBalancer -> ClusterIP Nov 25 15:59:50.685: INFO: At 2022-11-25 15:59:42 +0000 UTC - event for external-local-update-2kgxd: {kubelet bootstrap-e2e-minion-group-6gq3} BackOff: Back-off restarting failed container netexec in pod external-local-update-2kgxd_esipp-5513(62491e68-0673-4c65-8eba-28ea44c033bc) Nov 25 15:59:50.793: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 15:59:50.793: INFO: external-local-update-2kgxd bootstrap-e2e-minion-group-6gq3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:59:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:59:41 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:59:41 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 15:59:35 +0000 UTC }] Nov 25 15:59:50.793: INFO: Nov 25 15:59:51.030: INFO: Logging node info for node bootstrap-e2e-master Nov 25 15:59:51.083: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 636 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 15:55:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:55:59 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:55:59 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:55:59 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:55:59 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 15:59:51.083: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 15:59:51.217: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 15:59:51.517: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container kube-apiserver ready: true, restart count 1 Nov 25 15:59:51.517: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container etcd-container ready: true, restart count 1 Nov 25 15:59:51.517: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 15:59:51.517: INFO: 
l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container l7-lb-controller ready: true, restart count 4 Nov 25 15:59:51.517: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container kube-controller-manager ready: true, restart count 4 Nov 25 15:59:51.517: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container kube-scheduler ready: true, restart count 2 Nov 25 15:59:51.517: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container etcd-container ready: true, restart count 2 Nov 25 15:59:51.517: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:51.517: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 15:59:51.517: INFO: metadata-proxy-v0.1-7q9zt started at 2022-11-25 15:55:39 +0000 UTC (0+2 container statuses recorded) Nov 25 15:59:51.517: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:59:51.517: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:59:51.923: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 15:59:51.923: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 15:59:51.980: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 1711 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-25 15:55:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:57:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:58:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 15:59:51.980: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 15:59:52.038: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 15:59:52.158: INFO: volume-snapshot-controller-0 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container volume-snapshot-controller ready: false, restart count 3 Nov 25 15:59:52.158: INFO: metadata-proxy-v0.1-ch4s9 started at 2022-11-25 15:55:38 +0000 UTC (0+2 container statuses recorded) Nov 25 15:59:52.158: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:59:52.158: INFO: kube-dns-autoscaler-5f6455f985-4h7dq started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container autoscaler ready: false, restart count 3 Nov 25 15:59:52.158: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-txv7z started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.158: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-cvnzt started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.158: INFO: external-provisioner-l6t9c started at 2022-11-25 15:57:16 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: 
INFO: Container nfs-provisioner ready: true, restart count 3 Nov 25 15:59:52.158: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 15:59:52.158: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container hostpath ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 15:59:52.158: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-gc2wc started at 2022-11-25 15:57:32 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.158: INFO: external-local-update-2kgxd started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container netexec ready: false, restart count 1 Nov 25 15:59:52.158: INFO: kube-proxy-bootstrap-e2e-minion-group-6gq3 started at 2022-11-25 15:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 15:59:52.158: INFO: coredns-6d97d5ddb-6vwlx started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container coredns ready: false, restart count 4 Nov 25 15:59:52.158: INFO: l7-default-backend-8549d69d99-m478x started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 15:59:52.158: INFO: konnectivity-agent-prjfw started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container konnectivity-agent ready: true, restart count 3 Nov 25 15:59:52.158: INFO: pod-subpath-test-dynamicpv-vf5q started at 2022-11-25 15:57:32 +0000 UTC (1+2 container statuses recorded) Nov 25 15:59:52.158: INFO: Init container init-volume-dynamicpv-vf5q ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container test-container-subpath-dynamicpv-vf5q ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container test-container-volume-dynamicpv-vf5q ready: true, restart count 0 Nov 25 15:59:52.158: INFO: pod-subpath-test-preprovisionedpv-zrc5 started at 2022-11-25 15:57:37 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Init container init-volume-preprovisionedpv-zrc5 ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container test-container-subpath-preprovisionedpv-zrc5 ready: false, restart count 0 Nov 25 15:59:52.158: INFO: pod-d72e7e0e-8a03-47fa-99d8-581e9d66b5d0 started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:52.158: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 15:59:52.158: INFO: Container local-io-client ready: true, restart count 0 Nov 25 15:59:52.158: INFO: pod-e7b3e21a-23a0-4c71-a7b3-cb901c2491f8 started at 2022-11-25 15:57:37 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container write-pod ready: false, 
restart count 0 Nov 25 15:59:52.158: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-vnjns started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.158: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:59:52.461: INFO: Latency metrics for node bootstrap-e2e-minion-group-6gq3 Nov 25 15:59:52.461: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 15:59:52.511: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 2848 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 15:55:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:59:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:59:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:51 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:51 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:51 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:59:51 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8031^2d80c2fb-6cda-11ed-a8be-5a65049ea7a3 kubernetes.io/csi/csi-hostpath-multivolume-8031^2e633bf6-6cda-11ed-a8be-5a65049ea7a3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2e633bf6-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2d80c2fb-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},},Config:nil,},} Nov 25 15:59:52.512: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 15:59:52.562: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 15:59:52.719: INFO: external-local-nodeport-jpcl6 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container netexec ready: true, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-sztmx started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:59:52.719: INFO: test-hostpath-type-wxfbv started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: test-hostpath-type-fj9gg started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: test-hostpath-type-mcsc8 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: test-hostpath-type-7t77r started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-m4fnf started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.719: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+7 container statuses recorded) Nov 25 15:59:52.719: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 
15:59:52.719: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container hostpath ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 15:59:52.719: INFO: pod-subpath-test-preprovisionedpv-n5fx started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: test-hostpath-type-m29xc started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-kp5tg started at 2022-11-25 15:57:16 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.719: INFO: test-hostpath-type-6ljpr started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-d47k5 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: test-hostpath-type-xsbzm started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-pghkd started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 15:59:52.719: INFO: test-hostpath-type-5wn6t started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: pod-262f38e7-b42e-4bd9-bd33-c3bf07a7d4c0 started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-vlw6f started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.719: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container local-io-client ready: true, restart count 0 Nov 25 15:59:52.719: INFO: pod-subpath-test-preprovisionedpv-5vln started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Init container init-volume-preprovisionedpv-5vln ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container test-container-subpath-preprovisionedpv-5vln ready: false, restart count 0 Nov 25 15:59:52.719: INFO: test-hostpath-type-rr45x started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 25 15:59:52.719: INFO: coredns-6d97d5ddb-jlmlv started at 2022-11-25 15:55:57 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container coredns ready: true, restart count 3 Nov 25 15:59:52.719: INFO: test-hostpath-type-vftrr started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses 
recorded) Nov 25 15:59:52.719: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 15:59:52.719: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-zntjq started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:59:52.719: INFO: pod-fa04a498-f292-4897-98a1-474ecebcdb63 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: test-hostpath-type-2kkb5 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: pod-d1266809-560e-4dd7-b7c1-587fde9a2bf0 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.719: INFO: kube-proxy-bootstrap-e2e-minion-group-9cl6 started at 2022-11-25 15:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 15:59:52.719: INFO: metadata-proxy-v0.1-lm6hb started at 2022-11-25 15:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 15:59:52.719: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:59:52.719: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:59:52.719: INFO: pod-879bca5a-da87-481c-8825-3925192f7528 started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.719: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:52.719: INFO: pod-2caf6bf5-96a6-4b2d-af45-97c694310c39 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.720: INFO: pod-subpath-test-inlinevolume-26bb started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.720: INFO: pod-402e579e-bcae-493c-a043-165510eb4f39 started at <nil> (0+0 container statuses recorded) Nov 25 15:59:52.720: INFO: var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f started at 2022-11-25 15:57:36 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.720: INFO: Container dapi-container ready: false, restart count 0 Nov 25 15:59:52.720: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-h4k67 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.720: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 15:59:52.720: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-tq4h5 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.720: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:52.720: INFO: konnectivity-agent-gwjl2 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.720: INFO: Container konnectivity-agent ready: true, restart count 1 Nov 25 15:59:52.720: INFO: pod-1a10515f-adf0-4305-bed9-0275ef41a59c started at 2022-11-25 15:57:18 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:52.720: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:52.720: INFO: csi-mockplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+4 container statuses recorded) Nov 25 15:59:52.720: INFO: Container busybox ready: false, restart count 0 Nov 25 15:59:52.720: INFO: Container csi-provisioner ready: false, restart count 0 Nov 25 15:59:52.720: INFO: Container driver-registrar ready: false, restart count 0 Nov 25 15:59:52.720: INFO: Container mock ready: false, restart count 0 Nov 25 15:59:53.005: INFO: Latency metrics for node bootstrap-e2e-minion-group-9cl6 Nov 25 15:59:53.005: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 15:59:53.061: INFO: Node Info: 
&Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 2506 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4245":"bootstrap-e2e-minion-group-sp52","csi-hostpath-provisioning-4816":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 15:55:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 15:59:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 15:55:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:42 
+0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:59:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:59:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 15:59:53.061: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 15:59:53.115: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 15:59:53.215: INFO: metadata-proxy-v0.1-zsm52 started at 2022-11-25 15:55:43 +0000 UTC (0+2 container statuses recorded) Nov 25 15:59:53.215: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 15:59:53.215: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 15:59:53.215: INFO: konnectivity-agent-qc7wc started at 2022-11-25 15:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.215: INFO: Container konnectivity-agent ready: false, restart count 2 Nov 25 15:59:53.216: INFO: pod-subpath-test-dynamicpv-mdz4 started at 2022-11-25 15:57:27 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Init container init-volume-dynamicpv-mdz4 ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container test-container-subpath-dynamicpv-mdz4 ready: false, restart count 0 Nov 25 15:59:53.216: INFO: test-hostpath-type-vgxcp started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-dthgj started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:53.216: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:38 +0000 UTC (0+7 container statuses recorded) Nov 25 15:59:53.216: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container csi-resizer ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container hostpath ready: true, restart count 0 Nov 25 
15:59:53.216: INFO: Container liveness-probe ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 25 15:59:53.216: INFO: local-io-client started at 2022-11-25 15:59:49 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Init container local-io-init ready: false, restart count 0 Nov 25 15:59:53.216: INFO: Container local-io-client ready: false, restart count 0 Nov 25 15:59:53.216: INFO: pod-51619b80-f127-430b-b41a-06dfb0ba3c08 started at 2022-11-25 15:59:36 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container write-pod ready: true, restart count 0 Nov 25 15:59:53.216: INFO: pod-subpath-test-preprovisionedpv-4xmm started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Init container init-volume-preprovisionedpv-4xmm ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container test-container-subpath-preprovisionedpv-4xmm ready: false, restart count 0 Nov 25 15:59:53.216: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 15:59:53.216: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container hostpath ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 15:59:53.216: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-d2hmm started at 2022-11-25 15:59:50 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 15:59:53.216: INFO: kube-proxy-bootstrap-e2e-minion-group-sp52 started at 2022-11-25 15:55:42 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 15:59:53.216: INFO: metrics-server-v0.5.2-867b8754b9-xks4c started at 2022-11-25 15:56:07 +0000 UTC (0+2 container statuses recorded) Nov 25 15:59:53.216: INFO: Container metrics-server ready: false, restart count 3 Nov 25 15:59:53.216: INFO: Container metrics-server-nanny ready: true, restart count 3 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-qprs8 started at 2022-11-25 15:57:14 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: false, restart count 3 Nov 25 15:59:53.216: INFO: pod-back-off-image started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container back-off ready: false, restart count 4 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-fjj9q started at 2022-11-25 15:57:32 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:53.216: INFO: pod-59bb45d4-e913-4ef6-a071-39ddc50794f2 started at 2022-11-25 15:59:46 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:53.216: INFO: test-hostpath-type-tcwxq started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container host-path-testing ready: 
true, restart count 0 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-k78dz started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:53.216: INFO: pod-subpath-test-preprovisionedpv-6n9v started at 2022-11-25 15:57:38 +0000 UTC (1+2 container statuses recorded) Nov 25 15:59:53.216: INFO: Init container init-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container test-container-subpath-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 15:59:53.216: INFO: Container test-container-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 15:59:53.216: INFO: pod-1aeaf794-dfc5-4bf5-a5d6-a74390afdcef started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container write-pod ready: false, restart count 0 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-w2m7p started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 15:59:53.216: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 15:59:53.216: INFO: hostexec-bootstrap-e2e-minion-group-sp52-xbgck started at <nil> (0+0 container statuses recorded) Nov 25 15:59:54.343: INFO: Latency metrics for node bootstrap-e2e-minion-group-sp52 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-5513" for this suite. 11/25/22 15:59:54.344
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d32000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:12:01.505 Nov 25 16:12:01.505: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 16:12:01.507 Nov 25 16:12:01.546: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:03.587: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:05.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:07.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:09.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:11.587: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:13.587: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:15.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:17.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:19.587: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:21.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:23.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:25.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:27.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:29.587: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.586: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.625: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.625: INFO: Unexpected error: <*errors.errorString | 0xc000195d80>: { s: "timed out waiting for the condition", } Nov 25 16:12:31.625: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d32000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:12:31.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:12:31.665 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
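The secondary [PANICKED] failure above points at the AfterEach in test/e2e/network/loadbalancer.go:1262, which still runs even though BeforeEach never finished (the apiserver at 35.197.125.133:443 refused every connection, so namespace creation timed out and the suite's per-test state was never initialized). The sketch below is not the framework's actual code; it uses hypothetical names (testFramework, Namespace, afterEach) only to illustrate why cleanup code that dereferences setup state can turn a setup timeout into a nil-pointer panic, and how a guard keeps the original failure visible.

// Sketch only (hypothetical names, not the e2e framework's code): why an
// AfterEach can panic with "invalid memory address or nil pointer dereference"
// once BeforeEach has already failed.
package main

import "fmt"

type namespace struct{ Name string }

type testFramework struct {
	Namespace *namespace // stays nil if namespace creation timed out in BeforeEach
}

func afterEach(f *testFramework) {
	// Unguarded access would panic when setup never completed:
	//   fmt.Println("dumping services in", f.Namespace.Name)
	// A defensive guard lets the original setup error surface instead:
	if f.Namespace == nil {
		fmt.Println("setup never completed; skipping per-namespace cleanup")
		return
	}
	fmt.Println("dumping services in", f.Namespace.Name)
}

func main() {
	afterEach(&testFramework{}) // simulates the failed-BeforeEach case from the log
}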
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=LoadBalancer$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001161a40) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:11:11.629 Nov 25 16:11:11.629: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 16:11:11.634 Nov 25 16:11:11.675: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:13.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:15.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:17.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:19.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:21.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:23.717: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:25.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:27.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:29.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:31.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:33.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:35.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:37.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:39.715: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:41.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:41.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:41.754: INFO: Unexpected error: <*errors.errorString | 0xc0001fd960>: { s: "timed out waiting for the condition", } Nov 25 16:11:41.754: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc001161a40) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:11:41.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:11:41.794 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/network/utils.go:834 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000f4c460, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011c4000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 There were additional failures detected after the initial failure: [FAILED] Nov 25 16:01:00.984: failed to list events in namespace "esipp-6986": Get "https://35.197.125.133/api/v1/namespaces/esipp-6986/events": dial tcp 35.197.125.133:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 16:01:01.025: Couldn't delete ns: "esipp-6986": Delete "https://35.197.125.133/api/v1/namespaces/esipp-6986": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/esipp-6986", Err:(*net.OpError)(0xc00390f400)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:57:42.001 Nov 25 15:57:42.001: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 15:57:42.004 Nov 25 15:57:42.043: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:44.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:46.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:48.084: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:50.084: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:52.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:54.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:56.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:57:58.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:00.084: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:02.084: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:04.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:06.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:08.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 15:58:10.083: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:59:33.113 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:59:33.237 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should work for type=NodePort test/e2e/network/loadbalancer.go:1314 STEP: creating a service esipp-6986/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local 11/25/22 15:59:33.752 STEP: creating a pod to be part of the 
service external-local-nodeport 11/25/22 15:59:34.006 Nov 25 15:59:34.119: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:59:34.179: INFO: Found 0/1 pods - will retry Nov 25 15:59:36.223: INFO: Found all 1 pods Nov 25 15:59:36.223: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodeport-jpcl6] Nov 25 15:59:36.223: INFO: Waiting up to 2m0s for pod "external-local-nodeport-jpcl6" in namespace "esipp-6986" to be "running and ready" Nov 25 15:59:36.272: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.319587ms Nov 25 15:59:36.272: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:38.319: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095265282s Nov 25 15:59:38.319: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:40.342: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118672461s Nov 25 15:59:40.342: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:42.332: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108833209s Nov 25 15:59:42.332: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:44.425: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20116581s Nov 25 15:59:44.425: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:46.343: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119951826s Nov 25 15:59:46.343: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:48.367: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.14307543s Nov 25 15:59:48.367: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:50.347: INFO: Pod "external-local-nodeport-jpcl6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123674829s Nov 25 15:59:50.347: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodeport-jpcl6' on 'bootstrap-e2e-minion-group-9cl6' to be 'Running' but was 'Pending' Nov 25 15:59:52.329: INFO: Pod "external-local-nodeport-jpcl6": Phase="Running", Reason="", readiness=true. Elapsed: 16.106057954s Nov 25 15:59:52.330: INFO: Pod "external-local-nodeport-jpcl6" satisfied condition "running and ready" Nov 25 15:59:52.330: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [external-local-nodeport-jpcl6] STEP: Performing setup for networking test in namespace esipp-6986 11/25/22 15:59:53.446 STEP: creating a selector 11/25/22 15:59:53.446 STEP: Creating the service pods in kubernetes 11/25/22 15:59:53.446 Nov 25 15:59:53.447: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 25 15:59:53.738: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-6986" to be "running and ready" Nov 25 15:59:53.795: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 56.980496ms Nov 25 15:59:53.795: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:59:55.858: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.119818236s Nov 25 15:59:55.858: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:59:57.852: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.114404165s Nov 25 15:59:57.852: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 15:59:59.856: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.118667091s Nov 25 15:59:59.856: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:01.848: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.110611463s Nov 25 16:00:01.848: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:03.875: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.137605491s Nov 25 16:00:03.875: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:05.856: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.118573322s Nov 25 16:00:05.856: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:07.837: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.098847422s Nov 25 16:00:07.837: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:09.836: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.098297231s Nov 25 16:00:09.836: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:11.839: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.1015732s Nov 25 16:00:11.839: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:13.837: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.098884619s Nov 25 16:00:13.837: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:15.838: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.100725105s Nov 25 16:00:15.838: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 25 16:00:17.836: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 24.098712387s Nov 25 16:00:17.836: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 25 16:00:17.836: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 25 16:00:17.883: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-6986" to be "running and ready" Nov 25 16:00:17.926: INFO: Pod "netserver-1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.320707ms Nov 25 16:00:17.926: INFO: The phase of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:00:19.969: INFO: Pod "netserver-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086047934s Nov 25 16:00:19.969: INFO: The phase of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:00:21.971: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.088358554s Nov 25 16:00:21.971: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 16:00:23.969: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.086321954s Nov 25 16:00:23.969: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 16:00:26.003: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.119920447s Nov 25 16:00:26.003: INFO: The phase of Pod netserver-1 is Running (Ready = false) Nov 25 16:00:27.967: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 10.084295513s Nov 25 16:00:27.967: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 25 16:00:27.967: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 25 16:00:28.008: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-6986" to be "running and ready" Nov 25 16:00:28.050: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 41.543808ms Nov 25 16:00:28.050: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 16:00:30.092: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.083405391s Nov 25 16:00:30.092: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 16:00:32.093: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.084723968s Nov 25 16:00:32.093: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 16:00:34.095: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.086170661s Nov 25 16:00:34.095: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 25 16:00:36.092: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 8.08388297s Nov 25 16:00:36.092: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 25 16:00:36.092: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/25/22 16:00:36.134 Nov 25 16:00:36.202: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-6986" to be "running" Nov 25 16:00:36.242: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 40.618886ms Nov 25 16:00:38.284: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.082093767s Nov 25 16:00:38.284: INFO: Pod "test-container-pod" satisfied condition "running" Nov 25 16:00:38.324: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/25/22 16:00:38.324 Nov 25 16:00:38.324: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/25/22 16:00:38.408 Nov 25 16:00:38.498: INFO: Service node-port-service in namespace esipp-6986 found. Nov 25 16:00:38.642: INFO: Service session-affinity-service in namespace esipp-6986 found. 
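The steps above create the Service esipp-6986/external-local-nodeport with type=NodePort and ExternalTrafficPolicy=Local, then wait for its backend pod and the netserver pods to become running and ready. A minimal client-go sketch of such a Service follows; the selector, names, and port numbers are illustrative assumptions, not what the e2e service jig actually uses:

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createLocalNodePortService creates a NodePort Service whose external traffic
// policy is Local, so only nodes that host a ready backend pod answer node-port traffic.
func createLocalNodePortService(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-nodeport"},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeNodePort,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "external-local-nodeport"}, // assumed label
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       80,                   // assumed service port
				TargetPort: intstr.FromInt(8080), // assumed container port
			}},
		},
	}
	return cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
}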
STEP: Waiting for NodePort service to expose endpoint 11/25/22 16:00:38.683 Nov 25 16:00:39.683: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:40.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:41.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:42.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:43.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:44.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:45.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:46.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:47.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:48.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:49.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:50.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:51.683: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:52.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:53.683: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:54.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:55.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:56.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:57.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:58.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:00:59.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:01:00.684: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 25 16:01:00.723: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-6986: <*url.Error | 0xc001e32480>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/esipp-6986/endpoints", Err: <*net.OpError | 0xc003723360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0036c7530>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002a2f060>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:00.724: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-6986: Get "https://35.197.125.133/api/v1/namespaces/esipp-6986/endpoints": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000f4c460, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011c4000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 Nov 25 16:01:00.763: INFO: Unexpected error: <*url.Error | 0xc001d58ba0>: { Op: "Delete", URL: "https://35.197.125.133/api/v1/namespaces/esipp-6986/services/external-local-nodeport", Err: <*net.OpError | 0xc00390f180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001e328a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00133c740>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:00.763: FAIL: Delete "https://35.197.125.133/api/v1/namespaces/esipp-6986/services/external-local-nodeport": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.4.1() test/e2e/network/loadbalancer.go:1323 +0xe7 panic({0x70eb7e0, 0xc0007242a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000096380, 0xd2}, {0xc001bd7c28?, 0xc001611e80?, 0xc001bd7c50?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fadf60, 0xc001e32480}, {0xc001e324e0?, 0x75ee1b4?, 0x11?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000f4c460, 0x3c?) test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0011c4000, {0x0, 0x0, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.4() test/e2e/network/loadbalancer.go:1332 +0x145 [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:01:00.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 25 16:01:00.804: INFO: Output of kubectl describe svc: Nov 25 16:01:00.804: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=esipp-6986 describe svc --namespace=esipp-6986' Nov 25 16:01:00.944: INFO: rc: 1 Nov 25 16:01:00.944: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:01:00.944 STEP: Collecting events from namespace "esipp-6986". 
11/25/22 16:01:00.944 Nov 25 16:01:00.984: INFO: Unexpected error: failed to list events in namespace "esipp-6986": <*url.Error | 0xc001e328d0>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/esipp-6986/events", Err: <*net.OpError | 0xc0037234a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0036c7bf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002a2f220>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:00.984: FAIL: failed to list events in namespace "esipp-6986": Get "https://35.197.125.133/api/v1/namespaces/esipp-6986/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc001bd25c0, {0xc00366e6b0, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0030dd380}, {0xc00366e6b0, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001bd2650?, {0xc00366e6b0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011c4000) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001594b90?, 0xc0030def50?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001594b90?, 0x7fadfa0?}, {0xae73300?, 0xc0030def80?, 0x26225bd?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-6986" for this suite. 11/25/22 16:01:00.985 Nov 25 16:01:01.025: FAIL: Couldn't delete ns: "esipp-6986": Delete "https://35.197.125.133/api/v1/namespaces/esipp-6986": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/esipp-6986", Err:(*net.OpError)(0xc00390f400)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011c4000) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001594a70?, 0xc0030dffb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001594a70?, 0x0?}, {0xae73300?, 0x5?, 0xc0004e63f0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
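Every failure recorded for this spec, the endpoint validation, the event dump during cleanup, and the namespace deletion, has the same symptom: the apiserver at 35.197.125.133:443 refuses connections. The wait that times out first ("Waiting for amount of service:node-port-service endpoints to be 3") is in essence a poll of the Endpoints object; a rough sketch of that kind of loop, not the framework's actual helper in test/e2e/framework/network/utils.go, could look like:

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls until the named Endpoints object reports want ready addresses.
func waitForEndpointCount(ctx context.Context, cs kubernetes.Interface, ns, name string, want int) error {
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Swallowing the error retries transient apiserver failures; the run above
			// shows the framework instead treating the client error as fatal.
			return false, nil
		}
		got := 0
		for _, ss := range ep.Subsets {
			got += len(ss.Addresses)
		}
		return got == want, nil
	})
}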
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfrom\spods$'
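The --ginkgo.focus value is a regular expression over the full test name, with spaces written as \s and regex metacharacters escaped; assuming the same hack/e2e.go wrapper, a roughly equivalent and easier-to-read focus would be:

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig-network\] LoadBalancers ESIPP \[Slow\] should work from pods'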
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011f8000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 from junit_01.xml
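The secondary [PANICKED] failure is consistent with the spec-level BeforeEach (loadbalancer.go:1250) never having run because the framework's own BeforeEach failed first, leaving spec-local state nil when the AfterEach at loadbalancer.go:1262 touches it. A guard of the following shape is a sketch under that assumption, not the actual test code; it keeps the cleanup from panicking and masking the original error:

import (
	"github.com/onsi/ginkgo/v2"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = ginkgo.Describe("[sig-network] LoadBalancers ESIPP [Slow]", func() {
	f := framework.NewDefaultFramework("esipp")
	var cs clientset.Interface

	ginkgo.BeforeEach(func() {
		cs = f.ClientSet // never assigned if the framework's BeforeEach fails first
	})

	ginkgo.AfterEach(func() {
		// If the spec-level BeforeEach never ran, there is nothing to clean up;
		// bail out instead of dereferencing nil and hiding the real failure.
		if cs == nil {
			return
		}
		framework.Logf("cleaning up load balancer resources in %s", f.Namespace.Name)
	})
})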
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:00:28.797 Nov 25 16:00:28.797: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/25/22 16:00:28.799 Nov 25 16:02:28.847: INFO: Unexpected error: <*fmt.wrapError | 0xc00077a3e0>: { msg: "wait for service account \"default\" in namespace \"esipp-1350\": timed out waiting for the condition", err: <*errors.errorString | 0xc000239a00>{ s: "timed out waiting for the condition", }, } Nov 25 16:02:28.847: FAIL: wait for service account "default" in namespace "esipp-1350": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011f8000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:02:28.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:02:28.93 STEP: Collecting events from namespace "esipp-1350". 11/25/22 16:02:28.93 STEP: Found 0 events. 11/25/22 16:02:28.972 Nov 25 16:02:29.014: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:02:29.014: INFO: Nov 25 16:02:29.058: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:02:29.099: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:02:29.099: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:02:29.143: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:02:29.201: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container kube-scheduler ready: true, restart count 3 Nov 25 16:02:29.201: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container etcd-container ready: true, restart count 2 Nov 25 16:02:29.201: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 16:02:29.201: INFO: metadata-proxy-v0.1-7q9zt 
started at 2022-11-25 15:55:39 +0000 UTC (0+2 container statuses recorded) Nov 25 16:02:29.201: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:02:29.201: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:02:29.201: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container kube-controller-manager ready: false, restart count 5 Nov 25 16:02:29.201: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container etcd-container ready: true, restart count 1 Nov 25 16:02:29.201: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 16:02:29.201: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container l7-lb-controller ready: false, restart count 4 Nov 25 16:02:29.201: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.201: INFO: Container kube-apiserver ready: true, restart count 2 Nov 25 16:02:29.395: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 16:02:29.395: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:02:29.444: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 3392 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:57:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 16:00:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 15:58:20 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 16:02:29.444: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:02:29.487: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:02:29.592: INFO: pod-subpath-test-dynamicpv-vf5q started at 2022-11-25 15:57:32 +0000 UTC (1+2 container statuses recorded) Nov 25 16:02:29.592: INFO: Init container init-volume-dynamicpv-vf5q ready: true, restart count 0 Nov 25 16:02:29.592: INFO: Container test-container-subpath-dynamicpv-vf5q ready: true, restart count 0 Nov 25 16:02:29.592: INFO: Container test-container-volume-dynamicpv-vf5q ready: true, restart count 0 Nov 25 16:02:29.592: INFO: pod-subpath-test-preprovisionedpv-zrc5 started at 2022-11-25 15:57:37 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Init container init-volume-preprovisionedpv-zrc5 ready: true, restart count 0 Nov 25 16:02:29.592: INFO: Container test-container-subpath-preprovisionedpv-zrc5 ready: false, restart count 0 Nov 25 16:02:29.592: INFO: pod-d72e7e0e-8a03-47fa-99d8-581e9d66b5d0 started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:29.592: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 16:02:29.592: INFO: Container local-io-client ready: true, restart count 0 Nov 25 16:02:29.592: INFO: 
kube-proxy-bootstrap-e2e-minion-group-6gq3 started at 2022-11-25 15:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container kube-proxy ready: true, restart count 4 Nov 25 16:02:29.592: INFO: coredns-6d97d5ddb-6vwlx started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container coredns ready: true, restart count 5 Nov 25 16:02:29.592: INFO: l7-default-backend-8549d69d99-m478x started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 16:02:29.592: INFO: konnectivity-agent-prjfw started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 25 16:02:29.592: INFO: test-container-pod started at 2022-11-25 16:00:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container webserver ready: true, restart count 2 Nov 25 16:02:29.592: INFO: external-provisioner-grbwx started at 2022-11-25 16:00:07 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container nfs-provisioner ready: true, restart count 2 Nov 25 16:02:29.592: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-vrd5w started at 2022-11-25 15:59:55 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 16:02:29.592: INFO: pod-e7b3e21a-23a0-4c71-a7b3-cb901c2491f8 started at 2022-11-25 15:57:37 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:29.592: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-vnjns started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:02:29.592: INFO: lb-internal-cg5nh started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container netexec ready: true, restart count 1 Nov 25 16:02:29.592: INFO: netserver-0 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container webserver ready: true, restart count 1 Nov 25 16:02:29.592: INFO: volume-snapshot-controller-0 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container volume-snapshot-controller ready: true, restart count 5 Nov 25 16:02:29.592: INFO: pod-4a8fabfe-7f45-4739-819d-b5e5a02655a9 started at 2022-11-25 16:00:05 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:29.592: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 16:02:29.592: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container hostpath ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 16:02:29.592: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 16:02:29.592: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-gc2wc started at 2022-11-25 15:57:32 +0000 
UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 16:02:29.592: INFO: pod-132e88b8-a0e7-4cb2-b169-378703bbb64b started at 2022-11-25 16:00:11 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:29.592: INFO: metadata-proxy-v0.1-ch4s9 started at 2022-11-25 15:55:38 +0000 UTC (0+2 container statuses recorded) Nov 25 16:02:29.592: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:02:29.592: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:02:29.592: INFO: kube-dns-autoscaler-5f6455f985-4h7dq started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container autoscaler ready: true, restart count 4 Nov 25 16:02:29.592: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-txv7z started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:02:29.592: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-cvnzt started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:29.592: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:02:30.272: INFO: Latency metrics for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:02:30.272: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:02:30.463: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 3537 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7143":"csi-mock-csi-mock-volumes-7143"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} 
{node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2e633bf6-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2d80c2fb-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},},Config:nil,},} Nov 25 16:02:30.463: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:02:30.589: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:02:30.973: INFO: var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f started at 2022-11-25 15:57:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container dapi-container ready: false, restart count 0 Nov 25 16:02:30.973: INFO: konnectivity-agent-gwjl2 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container konnectivity-agent ready: false, restart count 2 Nov 25 16:02:30.973: INFO: pod-1a10515f-adf0-4305-bed9-0275ef41a59c started at 2022-11-25 15:57:18 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:30.973: INFO: csi-mockplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+4 container statuses recorded) Nov 25 16:02:30.973: INFO: Container busybox ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container mock ready: true, restart count 0 Nov 25 16:02:30.973: INFO: test-hostpath-type-fj9gg started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: test-hostpath-type-mcsc8 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: external-local-nodeport-jpcl6 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container netexec ready: false, restart count 2 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-sztmx started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-kp5tg started at 2022-11-25 15:57:16 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:02:30.973: INFO: 
test-hostpath-type-6ljpr started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 16:02:30.973: INFO: test-hostpath-type-7t77r started at 2022-11-25 15:59:33 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+7 container statuses recorded) Nov 25 16:02:30.973: INFO: Container csi-attacher ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container csi-provisioner ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container csi-resizer ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container csi-snapshotter ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container hostpath ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container liveness-probe ready: false, restart count 0 Nov 25 16:02:30.973: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 25 16:02:30.973: INFO: pod-subpath-test-preprovisionedpv-n5fx started at 2022-11-25 15:59:50 +0000 UTC (1+2 container statuses recorded) Nov 25 16:02:30.973: INFO: Init container init-volume-preprovisionedpv-n5fx ready: true, restart count 2 Nov 25 16:02:30.973: INFO: Container test-container-subpath-preprovisionedpv-n5fx ready: true, restart count 3 Nov 25 16:02:30.973: INFO: Container test-container-volume-preprovisionedpv-n5fx ready: true, restart count 2 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-d47k5 started at 2022-11-25 15:59:51 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 16:02:30.973: INFO: test-hostpath-type-n66rt started at 2022-11-25 16:00:00 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-pghkd started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: false, restart count 2 Nov 25 16:02:30.973: INFO: pod-262f38e7-b42e-4bd9-bd33-c3bf07a7d4c0 started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:30.973: INFO: test-hostpath-type-xsbzm started at 2022-11-25 15:59:46 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: netserver-1 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container webserver ready: true, restart count 1 Nov 25 16:02:30.973: INFO: coredns-6d97d5ddb-jlmlv started at 2022-11-25 15:55:57 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container coredns ready: false, restart count 4 Nov 25 16:02:30.973: INFO: test-hostpath-type-vftrr started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-zntjq started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:02:30.973: INFO: 
hostexec-bootstrap-e2e-minion-group-9cl6-vlw6f started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:02:30.973: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container local-io-client ready: false, restart count 0 Nov 25 16:02:30.973: INFO: pod-subpath-test-preprovisionedpv-5vln started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Init container init-volume-preprovisionedpv-5vln ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container test-container-subpath-preprovisionedpv-5vln ready: false, restart count 0 Nov 25 16:02:30.973: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-ct4px started at 2022-11-25 15:59:55 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container agnhost-container ready: true, restart count 0 Nov 25 16:02:30.973: INFO: kube-proxy-bootstrap-e2e-minion-group-9cl6 started at 2022-11-25 15:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container kube-proxy ready: false, restart count 4 Nov 25 16:02:30.973: INFO: metadata-proxy-v0.1-lm6hb started at 2022-11-25 15:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 16:02:30.973: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:02:30.973: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:02:30.973: INFO: pod-879bca5a-da87-481c-8825-3925192f7528 started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:30.973: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:31.719: INFO: Latency metrics for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:02:31.719: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:02:31.761: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 3515 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4245":"bootstrap-e2e-minion-group-sp52","csi-hostpath-provisioning-4816":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:02:31.761: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:02:31.806: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:02:31.870: INFO: test-hostpath-type-tcwxq started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container host-path-testing ready: true, restart count 0 Nov 25 16:02:31.870: INFO: pod-subpath-test-preprovisionedpv-6n9v started at 2022-11-25 15:57:38 +0000 UTC (1+2 container statuses recorded) Nov 25 16:02:31.870: INFO: Init container init-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:02:31.870: INFO: Container test-container-subpath-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:02:31.870: INFO: Container test-container-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:02:31.870: INFO: pod-1aeaf794-dfc5-4bf5-a5d6-a74390afdcef started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:31.870: INFO: hostexec-bootstrap-e2e-minion-group-sp52-w2m7p started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:02:31.870: INFO: netserver-2 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container webserver ready: true, restart count 3 Nov 25 16:02:31.870: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:38 +0000 UTC (0+7 container statuses recorded) Nov 25 16:02:31.870: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container hostpath ready: true, restart count 3 Nov 25 
16:02:31.870: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 16:02:31.870: INFO: metadata-proxy-v0.1-zsm52 started at 2022-11-25 15:55:43 +0000 UTC (0+2 container statuses recorded) Nov 25 16:02:31.870: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:02:31.870: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:02:31.870: INFO: konnectivity-agent-qc7wc started at 2022-11-25 15:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 25 16:02:31.870: INFO: pod-subpath-test-dynamicpv-mdz4 started at 2022-11-25 15:57:27 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Init container init-volume-dynamicpv-mdz4 ready: true, restart count 0 Nov 25 16:02:31.870: INFO: Container test-container-subpath-dynamicpv-mdz4 ready: false, restart count 0 Nov 25 16:02:31.870: INFO: test-hostpath-type-vgxcp started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:02:31.870: INFO: hostexec-bootstrap-e2e-minion-group-sp52-dthgj started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:02:31.870: INFO: pod-e1229de5-2e40-4cb2-b4e6-1393252d48bc started at 2022-11-25 16:00:02 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:31.870: INFO: pod-subpath-test-preprovisionedpv-4xmm started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Init container init-volume-preprovisionedpv-4xmm ready: true, restart count 0 Nov 25 16:02:31.870: INFO: Container test-container-subpath-preprovisionedpv-4xmm ready: false, restart count 0 Nov 25 16:02:31.870: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 16:02:31.870: INFO: Container csi-attacher ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-provisioner ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-resizer ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container csi-snapshotter ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container hostpath ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container liveness-probe ready: true, restart count 3 Nov 25 16:02:31.870: INFO: Container node-driver-registrar ready: true, restart count 3 Nov 25 16:02:31.870: INFO: pod-59bb45d4-e913-4ef6-a071-39ddc50794f2 started at 2022-11-25 15:59:46 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:02:31.870: INFO: kube-proxy-bootstrap-e2e-minion-group-sp52 started at 2022-11-25 15:55:42 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container kube-proxy ready: true, restart count 3 Nov 25 16:02:31.870: INFO: metrics-server-v0.5.2-867b8754b9-xks4c started at 2022-11-25 15:56:07 +0000 UTC (0+2 container statuses recorded) Nov 25 16:02:31.870: INFO: Container metrics-server ready: false, restart count 4 Nov 25 16:02:31.870: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 25 16:02:31.870: INFO: hostexec-bootstrap-e2e-minion-group-sp52-qprs8 
started at 2022-11-25 15:57:14 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container agnhost-container ready: true, restart count 4 Nov 25 16:02:31.870: INFO: pod-back-off-image started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:02:31.870: INFO: Container back-off ready: false, restart count 8 Nov 25 16:02:32.173: INFO: Latency metrics for node bootstrap-e2e-minion-group-sp52 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-1350" for this suite. 11/25/22 16:02:32.173
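A note on reading the node dumps above: the Capacity/Allocatable entries are printed in resource.Quantity's internal struct form (for example "ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI}", where 98831908Ki is the cached canonical string for 101203873792 bytes, and "memory: {{7815438336 0} {<nil>} BinarySI}" is the same structure before any string has been cached). A minimal sketch, assuming the standard k8s.io/apimachinery module, of decoding those quantities:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// 98831908Ki from the node dump is 101203873792 bytes.
	eph := resource.MustParse("98831908Ki")
	fmt.Println(eph.Value()) // 101203873792

	// The memory capacity printed as {{7815438336 0} {<nil>} BinarySI} is a
	// quantity whose canonical string has not been computed yet; String()
	// likely renders it as 7632264Ki (7815438336 / 1024).
	mem := resource.NewQuantity(7815438336, resource.BinarySI)
	fmt.Println(mem.String())
}
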
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sTCP\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012c5950)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:14:28.578 Nov 25 16:14:28.579: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:14:28.581 Nov 25 16:14:28.620: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:30.661: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:32.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:34.661: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:36.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:38.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:40.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:42.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:44.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:46.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:48.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:50.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:52.661: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:54.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:56.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:58.660: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:58.700: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:14:58.700: INFO: Unexpected error: <*errors.errorString | 0xc00017da30>: { s: "timed out waiting for the condition", } Nov 25 16:14:58.700: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012c5950) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:14:58.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:14:58.741 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
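The failure above is the framework's namespace-creation retry loop giving up: every POST to the apiserver returns "connection refused" until the poll expires with "timed out waiting for the condition". As a rough, purely illustrative sketch of that polling pattern (not the framework's actual code; the endpoint constant is a stand-in), using the same wait.PollImmediate helper that appears in the stack traces in this report:

package main

import (
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Hypothetical endpoint standing in for the unreachable apiserver address.
	const apiserver = "35.197.125.133:443"

	// Retry every 2s, give up after a bounded window; on expiry the returned
	// error message is "timed out waiting for the condition".
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		conn, dialErr := net.DialTimeout("tcp", apiserver, 2*time.Second)
		if dialErr != nil {
			fmt.Println("retrying:", dialErr) // e.g. connect: connection refused
			return false, nil                 // not done yet, keep polling
		}
		conn.Close()
		return true, nil
	})
	fmt.Println(err)
}
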
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\schange\sthe\stype\sand\sports\sof\sa\sUDP\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d324b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
----------
[FAILED] Nov 25 16:15:20.979: failed to list events in namespace "loadbalancers-8407": Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-8407/events": dial tcp 35.197.125.133:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 16:15:21.019: Couldn't delete ns: "loadbalancers-8407": Delete "https://35.197.125.133/api/v1/namespaces/loadbalancers-8407": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/loadbalancers-8407", Err:(*net.OpError)(0xc00206c000)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:12:31.673 Nov 25 16:12:31.674: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:12:31.675 Nov 25 16:12:31.714: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:33.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:35.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:37.755: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:39.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:41.755: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:43.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:45.755: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:47.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:49.755: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:51.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:53.754: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:20.898: INFO: Unexpected error: <*fmt.wrapError | 0xc0048d0020>: { msg: "wait for service account \"default\" in namespace \"loadbalancers-8407\": timed out waiting for the condition", err: <*errors.errorString | 0xc000195d80>{ s: "timed out waiting for the condition", }, } Nov 25 16:15:20.899: FAIL: wait for service account "default" in namespace "loadbalancers-8407": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d324b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:15:20.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:15:20.939 STEP: Collecting events from namespace "loadbalancers-8407". 
11/25/22 16:15:20.939 Nov 25 16:15:20.979: INFO: Unexpected error: failed to list events in namespace "loadbalancers-8407": <*url.Error | 0xc003a18000>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/loadbalancers-8407/events", Err: <*net.OpError | 0xc003568d70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002234a20>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003d32000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:15:20.979: FAIL: failed to list events in namespace "loadbalancers-8407": Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-8407/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00138a5c0, {0xc003d46558, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002214000}, {0xc003d46558, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00138a650?, {0xc003d46558?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000d324b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0002ee360?, 0xc005078fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00127af28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0002ee360?, 0x29449fc?}, {0xae73300?, 0xc005078f80?, 0x789a580?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-8407" for this suite. 11/25/22 16:15:20.979 Nov 25 16:15:21.019: FAIL: Couldn't delete ns: "loadbalancers-8407": Delete "https://35.197.125.133/api/v1/namespaces/loadbalancers-8407": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/loadbalancers-8407", Err:(*net.OpError)(0xc00206c000)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d324b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0002ee250?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0002ee250?, 0x4?}, {0xae73300?, 0xc0005569f0?, 0x66e0100?}) /usr/local/go/src/reflect/value.go:368 +0xbc
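In the dumped url.Error above, the peer address appears as a 16-byte slice ("IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133]"); that is simply Go's IPv4-in-IPv6 representation of 35.197.125.133. A small standalone snippet showing the same thing:

package main

import (
	"fmt"
	"net"
)

func main() {
	// net.IPv4 returns the address in 16-byte form, with the IPv4-mapped
	// prefix (ten zero bytes, then 0xff 0xff) in front of the four octets.
	ip := net.IPv4(35, 197, 125, 133)
	fmt.Println([]byte(ip))  // [0 0 0 0 0 0 0 0 0 0 255 255 35 197 125 133]
	fmt.Println(ip.String()) // 35.197.125.133
}
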
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\sLoadBalancer\sService\swithout\sNodePort\sand\schange\sit\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000dbe4b0)
	test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
	test/e2e/network/loadbalancer.go:73 +0x113
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:15:43.044 Nov 25 16:15:43.044: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:15:43.045 Nov 25 16:15:43.084: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:45.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:47.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:49.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:51.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:53.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:55.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:57.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:59.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:01.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:03.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:05.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:07.125: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:09.125: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:11.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:13.124: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:13.164: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:13.164: INFO: Unexpected error: <*errors.errorString | 0xc0001fda30>: { s: "timed out waiting for the condition", } Nov 25 16:16:13.164: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000dbe4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:16:13.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:16:13.204 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
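As in the TCP and UDP variants above, the secondary [PANICKED] failure at loadbalancer.go:73 comes from the suite's AfterEach running against state that BeforeEach never got to initialize before timing out. A minimal, purely illustrative sketch (hypothetical types, not the e2e framework's own code) of how that nil dereference arises and the guard that avoids it:

package main

import "fmt"

// lbState stands in for per-test state that setup would normally populate.
type lbState struct{ serviceName string }

// state stays nil because setup gave up before assigning it, mirroring the
// BeforeEach timeout in the log above.
var state *lbState

func cleanup() {
	if state == nil {
		// Without this guard, state.serviceName would panic with
		// "invalid memory address or nil pointer dereference".
		fmt.Println("setup never completed; nothing to clean up")
		return
	}
	fmt.Println("cleaning up", state.serviceName)
}

func main() {
	cleanup()
}
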
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/network/loadbalancer.go:655
k8s.io/kubernetes/test/e2e/network.glob..func19.6()
	test/e2e/network/loadbalancer.go:655 +0x832
There were additional failures detected after the initial failure:
[FAILED] Nov 25 16:11:09.719: failed to list events in namespace "loadbalancers-2210": Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/events": dial tcp 35.197.125.133:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 25 16:11:09.759: Couldn't delete ns: "loadbalancers-2210": Delete "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/loadbalancers-2210", Err:(*net.OpError)(0xc0047bec30)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:59:52.49 Nov 25 15:59:52.490: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 15:59:52.493 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:59:52.671 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:59:52.759 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to create an internal type load balancer [Slow] test/e2e/network/loadbalancer.go:571 STEP: creating pod to be part of service lb-internal 11/25/22 15:59:52.947 Nov 25 15:59:53.019: INFO: Waiting up to 2m0s for 1 pods to be created Nov 25 15:59:53.081: INFO: Found all 1 pods Nov 25 15:59:53.081: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-internal-cg5nh] Nov 25 15:59:53.081: INFO: Waiting up to 2m0s for pod "lb-internal-cg5nh" in namespace "loadbalancers-2210" to be "running and ready" Nov 25 15:59:53.158: INFO: Pod "lb-internal-cg5nh": Phase="Pending", Reason="", readiness=false. Elapsed: 77.242222ms Nov 25 15:59:53.159: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-cg5nh' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 15:59:55.207: INFO: Pod "lb-internal-cg5nh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125324142s Nov 25 15:59:55.207: INFO: Error evaluating pod condition running and ready: want pod 'lb-internal-cg5nh' on 'bootstrap-e2e-minion-group-6gq3' to be 'Running' but was 'Pending' Nov 25 15:59:57.274: INFO: Pod "lb-internal-cg5nh": Phase="Running", Reason="", readiness=true. Elapsed: 4.193205224s Nov 25 15:59:57.274: INFO: Pod "lb-internal-cg5nh" satisfied condition "running and ready" Nov 25 15:59:57.275: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-internal-cg5nh] STEP: creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled 11/25/22 15:59:57.275 Nov 25 15:59:57.376: INFO: Waiting up to 15m0s for service "lb-internal" to have a LoadBalancer Nov 25 16:01:01.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:03.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:05.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:07.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:09.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:11.464: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:13.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:15.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:17.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:19.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:21.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:23.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:25.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:27.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:29.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:31.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:33.469: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:35.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:37.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:39.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:41.464: INFO: Retrying .... 
error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:43.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:45.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:47.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:49.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:51.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:53.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:55.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:57.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:59.465: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:02:01.464: INFO: Retrying .... error trying to get Service lb-internal: Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/services/lb-internal": dial tcp 35.197.125.133:443: connect: connection refused ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m0.41s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 4m55.626s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0007fb080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0010a29c0?, 0xc0024e3b80?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00362df40, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00362df40, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m20.412s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m20.003s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 5m15.628s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0007fb080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0010a29c0?, 0xc0024e3b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00362df40, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00362df40, 0x0?) 
test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 5m40.416s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 5m40.007s) test/e2e/network/loadbalancer.go:571 At [By Step] creating a service with type LoadBalancer and cloud specific Internal-LB annotation enabled (Step Runtime: 5m35.632s) test/e2e/network/loadbalancer.go:593 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0007fb080, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x30?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0xc0010a29c0?, 0xc0024e3b80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0x7fa7740?, 0xc0001ca600?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).waitForCondition(0xc00362df40, 0x4?, {0x7600fe2, 0x14}, 0x7895b68) test/e2e/framework/service/jig.go:631 k8s.io/kubernetes/test/e2e/framework/service.(*TestJig).WaitForLoadBalancer(0xc00362df40, 0x0?) test/e2e/framework/service/jig.go:582 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:605 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ STEP: hitting the internal load balancer from pod 11/25/22 16:05:37.477 Nov 25 16:05:37.477: INFO: creating pod with host network Nov 25 16:05:37.477: INFO: Creating new host exec pod Nov 25 16:05:37.598: INFO: Waiting up to 5m0s for pod "ilb-host-exec" in namespace "loadbalancers-2210" to be "running and ready" Nov 25 16:05:37.668: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 69.318142ms Nov 25 16:05:37.668: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:39.729: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.130043803s Nov 25 16:05:39.729: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:41.720: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121387709s Nov 25 16:05:41.720: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:43.725: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12680165s Nov 25 16:05:43.725: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:45.761: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162846852s Nov 25 16:05:45.761: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:47.722: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123774078s Nov 25 16:05:47.722: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:49.739: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.140877165s Nov 25 16:05:49.739: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:51.793: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 14.19497004s Nov 25 16:05:51.793: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m0.418s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m0.009s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 15.431s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003754eb8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xa8?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3af8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0018156c0}, {0xc0037541c8, 0x12}, {0x75d7031, 0xd}, {0x75ee704, 0x11}, 0xc000e45ea0?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0018156c0?}, {0x75d7031?, 0x0?}, {0xc0037541c8?, 0x0?}, 0x0?) 
test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x801de88, 0xc0018156c0}, {0xc0037541c8, 0x12}, {0x75d7031, 0xd}) test/e2e/network/service.go:4057 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:618 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:05:53.738: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.139168346s Nov 25 16:05:53.738: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:55.729: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.130727362s Nov 25 16:05:55.729: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:57.722: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 20.123890527s Nov 25 16:05:57.722: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:59.721: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 22.123030904s Nov 25 16:05:59.722: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:01.717: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 24.118270107s Nov 25 16:06:01.717: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:03.742: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 26.143820941s Nov 25 16:06:03.742: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:05.865: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 28.266801977s Nov 25 16:06:05.865: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:07.738: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 30.139311178s Nov 25 16:06:07.738: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:09.725: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.126693699s Nov 25 16:06:09.725: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:11.725: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.126596854s Nov 25 16:06:11.725: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m20.421s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m20.012s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 35.434s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003754eb8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xa8?, 0x2fd9d05?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3af8?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76f3c92?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x801de88?, 0xc0018156c0}, {0xc0037541c8, 0x12}, {0x75d7031, 0xd}, {0x75ee704, 0x11}, 0xc000e45ea0?, 0x7895ad0) test/e2e/framework/pod/wait.go:290 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x801de88?, 0xc0018156c0?}, {0x75d7031?, 0x0?}, {0xc0037541c8?, 0x0?}, 0x0?) test/e2e/framework/pod/wait.go:564 > k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x801de88, 0xc0018156c0}, {0xc0037541c8, 0x12}, {0x75d7031, 0xd}) test/e2e/network/service.go:4057 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:618 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:06:13.721: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 36.122365677s Nov 25 16:06:13.721: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:15.803: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 38.204503422s Nov 25 16:06:15.803: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:17.738: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 40.139057337s Nov 25 16:06:17.738: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:19.743: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 42.144899763s Nov 25 16:06:19.743: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:21.731: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.132838824s Nov 25 16:06:21.731: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:23.737: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 46.138386508s Nov 25 16:06:23.737: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:25.749: INFO: Pod "ilb-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 48.150466986s Nov 25 16:06:25.749: INFO: The phase of Pod ilb-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:27.757: INFO: Pod "ilb-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 50.158646028s Nov 25 16:06:27.757: INFO: The phase of Pod ilb-host-exec is Running (Ready = true) Nov 25 16:06:27.757: INFO: Pod "ilb-host-exec" satisfied condition "running and ready" Nov 25 16:06:27.757: INFO: Waiting up to 15m0s for service "lb-internal"'s internal LB to respond to requests Nov 25 16:06:27.757: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:06:28.195: INFO: rc: 1 Nov 25 16:06:28.195: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 6m40.423s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 6m40.014s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 55.436s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0024d7ec0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:06:48.195: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:06:48.968: INFO: rc: 1 Nov 25 16:06:48.968: INFO: error curling; stdout: . err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m0.426s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m0.017s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 1m15.439s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0024d7ec0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:07:08.195: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:07:08.725: INFO: rc: 1 Nov 25 16:07:08.725: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m20.431s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m20.021s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 1m35.444s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0024d7ec0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:07:28.196: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:07:28.742: INFO: rc: 1 Nov 25 16:07:28.742: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 7m40.434s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 7m40.025s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 1m55.447s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0024d7ec0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:07:48.195: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:07:48.783: INFO: rc: 1 Nov 25 16:07:48.783: INFO: error curling; stdout: . 
err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello': Command stdout: stderr: error: unable to upgrade connection: container not found ("agnhost-container") error: exit status 1 ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m0.436s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m0.027s) test/e2e/network/loadbalancer.go:571 At [By Step] hitting the internal load balancer from pod (Step Runtime: 2m15.449s) test/e2e/network/loadbalancer.go:616 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc0024d7ec0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77554e9?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:622 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:08:08.195: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 exec ilb-host-exec -- /bin/sh -x -c curl -m 5 'http://10.138.0.6:80/echo?msg=hello'' Nov 25 16:08:09.040: INFO: stderr: "+ curl -m 5 'http://10.138.0.6:80/echo?msg=hello'\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 5 100 5 0 0 1926 0 --:--:-- --:--:-- --:--:-- 2500\n" Nov 25 16:08:09.040: INFO: stdout: "hello" Nov 25 16:08:09.040: INFO: Successful curl; stdout: hello STEP: switching to external type LoadBalancer 11/25/22 16:08:09.041 Nov 25 16:08:09.360: INFO: Waiting up to 15m0s for service "lb-internal" to have an external LoadBalancer ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m20.439s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m20.029s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 3.888s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 8m40.441s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 8m40.032s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 23.891s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m0.444s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m0.034s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 43.893s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m20.446s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m20.036s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 1m3.895s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 9m40.448s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 9m40.039s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 1m23.898s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m0.451s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m0.042s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 1m43.9s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m20.453s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m20.044s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 2m3.903s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 10m40.455s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 10m40.046s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 2m23.904s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #22 Automatically polling progress: [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow] (Spec Runtime: 11m0.457s) test/e2e/network/loadbalancer.go:571 In [It] (Node Runtime: 11m0.048s) test/e2e/network/loadbalancer.go:571 At [By Step] switching to external type LoadBalancer (Step Runtime: 2m43.907s) test/e2e/network/loadbalancer.go:641 Spec Goroutine goroutine 662 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003988480, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0xb8?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x75b521a?, 0xc0024e3d08?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x77433b6?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:647 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc0035cfe00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:11:09.485: FAIL: Loadbalancer IP not changed to external. 
Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:655 +0x832 STEP: Clean up loadbalancer service 11/25/22 16:11:09.485 STEP: Delete service with finalizer 11/25/22 16:11:09.485 Nov 25 16:11:09.525: FAIL: Failed to delete service loadbalancers-2210/lb-internal Full Stack Trace k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceDeletedWithFinalizer({0x801de88, 0xc0018156c0}, {0xc003988438, 0x12}, {0xc0016ec130, 0xb}) test/e2e/framework/service/wait.go:37 +0x185 k8s.io/kubernetes/test/e2e/network.glob..func19.6.3() test/e2e/network/loadbalancer.go:602 +0x67 panic({0x70eb7e0, 0xc000794700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Failf({0x7695064?, 0x4?}, {0x0?, 0x40?, 0xc0024e3f20?}) test/e2e/framework/log.go:49 +0x12c k8s.io/kubernetes/test/e2e/network.glob..func19.6() test/e2e/network/loadbalancer.go:655 +0x832 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:11:09.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 25 16:11:09.565: INFO: Output of kubectl describe svc: Nov 25 16:11:09.565: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2210 describe svc --namespace=loadbalancers-2210' Nov 25 16:11:09.679: INFO: rc: 1 Nov 25 16:11:09.679: INFO: [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:11:09.679 STEP: Collecting events from namespace "loadbalancers-2210". 
11/25/22 16:11:09.679 Nov 25 16:11:09.719: INFO: Unexpected error: failed to list events in namespace "loadbalancers-2210": <*url.Error | 0xc00471a900>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/events", Err: <*net.OpError | 0xc0047be910>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0033a11d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000fca620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:11:09.719: FAIL: failed to list events in namespace "loadbalancers-2210": Get "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0010f85c0, {0xc0037541c8, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0018156c0}, {0xc0037541c8, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0010f8650?, {0xc0037541c8?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0011f73b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc000f03830?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f03830?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-2210" for this suite. 11/25/22 16:11:09.719 Nov 25 16:11:09.759: FAIL: Couldn't delete ns: "loadbalancers-2210": Delete "https://35.197.125.133/api/v1/namespaces/loadbalancers-2210": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/loadbalancers-2210", Err:(*net.OpError)(0xc0047bec30)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0011f73b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc000f03760?, 0x9?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x9?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc000f03760?, 0xc00234f500?}, {0xae73300?, 0x9?, 0x1?}) /usr/local/go/src/reflect/value.go:368 +0xbc
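The 16:11:09.485 failure ("Loadbalancer IP not changed to external", loadbalancer.go:655) fires after the test switches the annotated service back to an external LoadBalancer and polls for a new ingress address. A minimal sketch of that kind of poll is below; waitForExternalIP and the 10.0.0.0/8 check are assumptions for illustration, not the e2e framework's actual helper.

```go
// Hedged sketch: polling until a Service reports an ingress IP outside an
// assumed internal range. Not the framework's real implementation.
package sketch

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForExternalIP polls until the Service exposes an ingress IP that is no
// longer in the (assumed) 10.0.0.0/8 internal range, or the timeout expires.
func waitForExternalIP(cs kubernetes.Interface, ns, name string, timeout time.Duration) (string, error) {
	var ip string
	err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Tolerate transient API errors such as the "connection refused"
			// lines in the log above and keep polling.
			return false, nil
		}
		ing := svc.Status.LoadBalancer.Ingress
		if len(ing) == 0 || ing[0].IP == "" {
			return false, nil
		}
		ip = ing[0].IP
		// Crude internal-range check; the real test compares against the
		// previously observed internal address.
		return !strings.HasPrefix(ip, "10."), nil
	})
	if err != nil {
		return "", fmt.Errorf("load balancer IP not changed to external: %w", err)
	}
	return ip, nil
}
```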
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shandle\sload\sbalancer\scleanup\sfinalizer\sfor\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cc44b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113
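The follow-on [PANICKED] in AfterEach (loadbalancer.go:73) is consistent with the cleanup dereferencing state that the timed-out BeforeEach never initialized. A defensive sketch of that pattern follows; afterEachDumpServices and describeSvc are assumed names, not the test's real functions.

```go
// Hedged sketch: guarding an AfterEach-style cleanup against state left nil
// by a failed BeforeEach. cs and describeSvc are stand-ins for illustration.
package sketch

import "k8s.io/client-go/kubernetes"

func afterEachDumpServices(cs kubernetes.Interface, namespace string) {
	if cs == nil || namespace == "" {
		// BeforeEach never produced a client or namespace; dereferencing
		// here would panic exactly as the log shows, so bail out instead.
		return
	}
	describeSvc(cs, namespace)
}

// describeSvc stands in for the framework's "kubectl describe svc" dump; its
// implementation is out of scope for this sketch.
func describeSvc(cs kubernetes.Interface, namespace string) {}
```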
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:12:01.536 Nov 25 16:12:01.536: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:12:01.538 Nov 25 16:12:01.577: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:03.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:05.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:07.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:09.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:11.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:13.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:15.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:17.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:19.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:21.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:23.618: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:25.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:27.618: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:29.618: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.617: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.657: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:31.657: INFO: Unexpected error: <*errors.errorString | 0xc0001c9a00>: { s: "timed out waiting for the condition", } Nov 25 16:12:31.657: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cc44b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:12:31.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:12:31.698 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
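The BeforeEach log above shows the framework retrying namespace creation roughly every two seconds for about thirty seconds before giving up with "timed out waiting for the condition". A hedged sketch of that retry loop is below; createTestNamespace and the 30-second timeout are assumptions for illustration, not the framework's actual code.

```go
// Hedged sketch: retrying namespace creation through apiserver outages, in
// the spirit of the repeated "Unexpected error while creating namespace"
// lines above. Not the e2e framework's real helper.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}
		got, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if err != nil {
			// Log and retry; "connection refused" is treated as transient.
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil
		}
		created = got
		return true, nil
	})
	if err != nil {
		// Surfaces as "timed out waiting for the condition" when the
		// apiserver never comes back within the window.
		return nil, err
	}
	return created, nil
}
```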
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75cdc0f?, {0x801de88, 0xc002f351e0}, 0xc002172500, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.10() test/e2e/network/loadbalancer.go:798 +0xf0
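This failure comes out of execAffinityTestForLBServiceWithOptionalTransition (service.go:3978), which repeatedly hits the LoadBalancer and requires the responses to come from a single backend. A rough, self-contained sketch of such an affinity check follows; hostnameOf, checkAffinity, and the reliance on a hostname-echoing backend are assumptions, not the test's actual code.

```go
// Hedged sketch: a session-affinity check that requires all successful
// responses to name the same backend pod. Names and request count are
// illustrative assumptions.
package sketch

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// hostnameOf issues one request and returns the body, assumed here to be the
// serving pod's name (as a hostname-echoing backend would return).
func hostnameOf(url string) (string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

// checkAffinity returns true only if every successful response came from a
// single backend pod.
func checkAffinity(url string, requests int) (bool, error) {
	seen := map[string]int{}
	for i := 0; i < requests; i++ {
		host, err := hostnameOf(url)
		if err != nil {
			// Tolerate transient failures, as the e2e helpers do.
			continue
		}
		seen[host]++
	}
	if len(seen) == 0 {
		return false, fmt.Errorf("no successful responses from %s", url)
	}
	return len(seen) == 1, nil
}
```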
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:04:56.995 Nov 25 16:04:56.995: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:04:56.997 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:04:57.266 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:04:57.362 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:791 STEP: creating service in namespace loadbalancers-9263 11/25/22 16:04:57.561 STEP: creating service affinity-lb in namespace loadbalancers-9263 11/25/22 16:04:57.561 STEP: creating replication controller affinity-lb in namespace loadbalancers-9263 11/25/22 16:04:57.666 I1125 16:04:57.724466 10133 runners.go:193] Created replication controller with name: affinity-lb, namespace: loadbalancers-9263, replica count: 3 I1125 16:05:00.826258 10133 runners.go:193] affinity-lb Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 16:05:03.827092 10133 runners.go:193] affinity-lb Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 16:05:06.827466 10133 runners.go:193] affinity-lb Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1125 16:05:06.827494 10133 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-9cl6 I1125 16:05:06.892580 10133 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 4521 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6","csi-mock-csi-mock-volumes-5257":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 16:04:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-25 16:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:04:58 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:04:58 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:04:58 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:04:58 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5257^e62020bd-6cda-11ed-90d6-36bfa29f10a9],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5257^e62020bd-6cda-11ed-90d6-36bfa29f10a9,DevicePath:,},},Config:nil,},} I1125 16:05:06.893050 10133 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 I1125 16:05:06.952594 10133 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 I1125 16:05:07.160132 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-54q8r started at 2022-11-25 16:04:56 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160161 10133 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 16:05:07.160166 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-kp5tg started at 2022-11-25 15:57:16 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160173 10133 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 16:05:07.160177 10133 runners.go:193] test-hostpath-type-6ljpr started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160183 10133 runners.go:193] Container host-path-testing ready: false, restart count 0 I1125 16:05:07.160188 10133 runners.go:193] csi-hostpathplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+7 container statuses recorded) I1125 16:05:07.160196 10133 runners.go:193] Container csi-attacher ready: true, restart count 1 I1125 16:05:07.160202 10133 runners.go:193] Container csi-provisioner ready: true, restart count 1 I1125 16:05:07.160206 10133 runners.go:193] Container csi-resizer ready: true, restart count 1 I1125 16:05:07.160211 10133 runners.go:193] Container csi-snapshotter ready: true, restart count 1 I1125 16:05:07.160215 10133 runners.go:193] Container hostpath ready: true, restart count 1 I1125 16:05:07.160220 10133 runners.go:193] Container liveness-probe ready: true, restart count 1 I1125 16:05:07.160224 10133 runners.go:193] Container node-driver-registrar ready: true, restart count 1 I1125 16:05:07.160228 10133 runners.go:193] pod-subpath-test-preprovisionedpv-n5fx started at 2022-11-25 15:59:50 +0000 UTC (1+2 container statuses recorded) I1125 16:05:07.160234 10133 runners.go:193] Init container init-volume-preprovisionedpv-n5fx ready: true, restart count 3 I1125 16:05:07.160238 10133 runners.go:193] Container test-container-subpath-preprovisionedpv-n5fx ready: false, restart count 3 I1125 16:05:07.160242 10133 runners.go:193] Container test-container-volume-preprovisionedpv-n5fx ready: false, restart count 2 I1125 16:05:07.160247 10133 runners.go:193] pvc-volume-tester-68svt started at 2022-11-25 
16:04:54 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160253 10133 runners.go:193] Container volume-tester ready: false, restart count 0 I1125 16:05:07.160257 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-pghkd started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160263 10133 runners.go:193] Container agnhost-container ready: true, restart count 3 I1125 16:05:07.160267 10133 runners.go:193] csi-mockplugin-attacher-0 started at 2022-11-25 16:04:48 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160273 10133 runners.go:193] Container csi-attacher ready: true, restart count 0 I1125 16:05:07.160277 10133 runners.go:193] pod-262f38e7-b42e-4bd9-bd33-c3bf07a7d4c0 started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160285 10133 runners.go:193] Container write-pod ready: false, restart count 0 I1125 16:05:07.160289 10133 runners.go:193] netserver-1 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160295 10133 runners.go:193] Container webserver ready: false, restart count 3 I1125 16:05:07.160300 10133 runners.go:193] coredns-6d97d5ddb-jlmlv started at 2022-11-25 15:55:57 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160305 10133 runners.go:193] Container coredns ready: false, restart count 5 I1125 16:05:07.160309 10133 runners.go:193] test-hostpath-type-vftrr started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160314 10133 runners.go:193] Container host-path-testing ready: false, restart count 0 I1125 16:05:07.160319 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-zntjq started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160325 10133 runners.go:193] Container agnhost-container ready: true, restart count 1 I1125 16:05:07.160329 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-vlw6f started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160334 10133 runners.go:193] Container agnhost-container ready: true, restart count 3 I1125 16:05:07.160338 10133 runners.go:193] local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) I1125 16:05:07.160343 10133 runners.go:193] Init container local-io-init ready: true, restart count 0 I1125 16:05:07.160347 10133 runners.go:193] Container local-io-client ready: false, restart count 0 I1125 16:05:07.160351 10133 runners.go:193] pod-subpath-test-preprovisionedpv-5vln started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) I1125 16:05:07.160357 10133 runners.go:193] Init container init-volume-preprovisionedpv-5vln ready: true, restart count 0 I1125 16:05:07.160361 10133 runners.go:193] Container test-container-subpath-preprovisionedpv-5vln ready: false, restart count 0 I1125 16:05:07.160365 10133 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-9cl6 started at 2022-11-25 15:55:36 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160372 10133 runners.go:193] Container kube-proxy ready: false, restart count 5 I1125 16:05:07.160376 10133 runners.go:193] metadata-proxy-v0.1-lm6hb started at 2022-11-25 15:55:37 +0000 UTC (0+2 container statuses recorded) I1125 16:05:07.160382 10133 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1125 16:05:07.160385 10133 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1125 16:05:07.160389 10133 runners.go:193] 
pod-879bca5a-da87-481c-8825-3925192f7528 started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160395 10133 runners.go:193] Container write-pod ready: false, restart count 0 I1125 16:05:07.160398 10133 runners.go:193] csi-mockplugin-0 started at 2022-11-25 16:04:48 +0000 UTC (0+3 container statuses recorded) I1125 16:05:07.160403 10133 runners.go:193] Container csi-provisioner ready: true, restart count 0 I1125 16:05:07.160407 10133 runners.go:193] Container driver-registrar ready: true, restart count 0 I1125 16:05:07.160411 10133 runners.go:193] Container mock ready: true, restart count 0 I1125 16:05:07.160415 10133 runners.go:193] affinity-lb-ltd4r started at 2022-11-25 16:04:57 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160420 10133 runners.go:193] Container affinity-lb ready: true, restart count 1 I1125 16:05:07.160424 10133 runners.go:193] var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f started at 2022-11-25 15:57:36 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160429 10133 runners.go:193] Container dapi-container ready: false, restart count 0 I1125 16:05:07.160433 10133 runners.go:193] konnectivity-agent-gwjl2 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160438 10133 runners.go:193] Container konnectivity-agent ready: false, restart count 3 I1125 16:05:07.160442 10133 runners.go:193] pod-1a10515f-adf0-4305-bed9-0275ef41a59c started at 2022-11-25 15:57:18 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160448 10133 runners.go:193] Container write-pod ready: false, restart count 0 I1125 16:05:07.160454 10133 runners.go:193] csi-mockplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+4 container statuses recorded) I1125 16:05:07.160459 10133 runners.go:193] Container busybox ready: true, restart count 1 I1125 16:05:07.160462 10133 runners.go:193] Container csi-provisioner ready: true, restart count 2 I1125 16:05:07.160466 10133 runners.go:193] Container driver-registrar ready: true, restart count 1 I1125 16:05:07.160470 10133 runners.go:193] Container mock ready: true, restart count 1 I1125 16:05:07.160473 10133 runners.go:193] pod-subpath-test-inlinevolume-7w2q started at 2022-11-25 16:04:44 +0000 UTC (1+2 container statuses recorded) I1125 16:05:07.160478 10133 runners.go:193] Init container init-volume-inlinevolume-7w2q ready: true, restart count 1 I1125 16:05:07.160482 10133 runners.go:193] Container test-container-subpath-inlinevolume-7w2q ready: true, restart count 1 I1125 16:05:07.160486 10133 runners.go:193] Container test-container-volume-inlinevolume-7w2q ready: true, restart count 1 I1125 16:05:07.160490 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-clmvd started at 2022-11-25 16:04:55 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160495 10133 runners.go:193] Container agnhost-container ready: true, restart count 0 I1125 16:05:07.160498 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-l4gvp started at <nil> (0+0 container statuses recorded) I1125 16:05:07.160529 10133 runners.go:193] affinity-lb-esipp-transition-687ks started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160535 10133 runners.go:193] Container affinity-lb-esipp-transition ready: false, restart count 1 I1125 16:05:07.160538 10133 runners.go:193] test-hostpath-type-fj9gg started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) I1125 16:05:07.160544 10133 runners.go:193] Container 
host-path-sh-testing ready: false, restart count 0
I1125 16:05:07.160547 10133 runners.go:193] external-local-nodeport-jpcl6 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded)
I1125 16:05:07.160552 10133 runners.go:193] Container netexec ready: true, restart count 4
I1125 16:05:07.160556 10133 runners.go:193] hostexec-bootstrap-e2e-minion-group-9cl6-sztmx started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded)
I1125 16:05:07.160561 10133 runners.go:193] Container agnhost-container ready: true, restart count 2
I1125 16:05:08.597966 10133 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-9cl6
I1125 16:05:08.710932 10133 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-9263
Nov 25 16:05:08.710: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-9263:
    <*errors.errorString | 0xc00169b620>: {
        s: "1 containers failed which is more than allowed 0",
    }
Nov 25 16:05:08.711: FAIL: failed to create replication controller with service in the namespace: loadbalancers-9263: 1 containers failed which is more than allowed 0

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75cdc0f?, {0x801de88, 0xc002f351e0}, 0xc002172500, 0x0)
	test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...)
	test/e2e/network/service.go:3966
k8s.io/kubernetes/test/e2e/network.glob..func19.10()
	test/e2e/network/loadbalancer.go:798 +0xf0
[AfterEach] [sig-network] LoadBalancers
  test/e2e/framework/node/init/init.go:32
Nov 25 16:05:08.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers
  test/e2e/network/loadbalancer.go:71
Nov 25 16:05:08.798: INFO: Output of kubectl describe svc:
Nov 25 16:05:08.798: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.125.133 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-9263 describe svc --namespace=loadbalancers-9263'
Nov 25 16:05:09.583: INFO: stderr: ""
Nov 25 16:05:09.583: INFO: stdout: "Name: affinity-lb\nNamespace: loadbalancers-9263\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.161.215\nIPs: 10.0.161.215\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 32694/TCP\nEndpoints: 10.64.0.74:9376,10.64.1.82:9376,10.64.3.61:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n"
Nov 25 16:05:09.583: INFO:
Name:                     affinity-lb
Namespace:                loadbalancers-9263
Labels:                   <none>
Annotations:              <none>
Selector:                 name=affinity-lb
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.161.215
IPs:                      10.0.161.215
Port:                     <unset> 80/TCP
TargetPort:               9376/TCP
NodePort:                 <unset> 32694/TCP
Endpoints:                10.64.0.74:9376,10.64.1.82:9376,10.64.3.61:9376
Session Affinity:         ClientIP
External Traffic Policy:  Cluster
Events:                   <none>
[DeferCleanup (Each)] [sig-network] LoadBalancers
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 16:05:09.583
STEP: Collecting events from namespace "loadbalancers-9263". 11/25/22 16:05:09.584
STEP: Found 18 events.
11/25/22 16:05:09.789 Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-bfwsf Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-ltd4r Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-gzd69 Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb-bfwsf: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9263/affinity-lb-bfwsf to bootstrap-e2e-minion-group-6gq3 Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb-gzd69: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9263/affinity-lb-gzd69 to bootstrap-e2e-minion-group-sp52 Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:57 +0000 UTC - event for affinity-lb-ltd4r: {default-scheduler } Scheduled: Successfully assigned loadbalancers-9263/affinity-lb-ltd4r to bootstrap-e2e-minion-group-9cl6 Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:59 +0000 UTC - event for affinity-lb-bfwsf: {kubelet bootstrap-e2e-minion-group-6gq3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:59 +0000 UTC - event for affinity-lb-bfwsf: {kubelet bootstrap-e2e-minion-group-6gq3} Created: Created container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:04:59 +0000 UTC - event for affinity-lb-bfwsf: {kubelet bootstrap-e2e-minion-group-6gq3} Started: Started container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-gzd69: {kubelet bootstrap-e2e-minion-group-sp52} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-gzd69: {kubelet bootstrap-e2e-minion-group-sp52} Created: Created container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-gzd69: {kubelet bootstrap-e2e-minion-group-sp52} Started: Started container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-ltd4r: {kubelet bootstrap-e2e-minion-group-9cl6} Created: Created container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-ltd4r: {kubelet bootstrap-e2e-minion-group-9cl6} Started: Started container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-ltd4r: {kubelet bootstrap-e2e-minion-group-9cl6} Killing: Stopping container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:00 +0000 UTC - event for affinity-lb-ltd4r: {kubelet bootstrap-e2e-minion-group-9cl6} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:01 +0000 UTC - event for affinity-lb-gzd69: {kubelet bootstrap-e2e-minion-group-sp52} Killing: Stopping container affinity-lb Nov 25 16:05:09.789: INFO: At 2022-11-25 16:05:05 +0000 UTC - event for affinity-lb-gzd69: {kubelet bootstrap-e2e-minion-group-sp52} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
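The `kubectl describe svc` output above pins down the Service under test: a LoadBalancer Service named affinity-lb with ClientIP session affinity, selecting pods labeled name=affinity-lb and mapping port 80 to targetPort 9376. The following is a minimal client-go reconstruction of such a Service, approximated from that describe output; it is not the manifest the e2e test actually generates.

```go
// Approximate reconstruction of the "affinity-lb" Service, based only on the
// kubectl describe output in this log (not the test's real code).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb"},
		Spec: corev1.ServiceSpec{
			// LoadBalancer Service with ClientIP affinity, as reported by describe.
			Type:            corev1.ServiceTypeLoadBalancer,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-lb"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}

	created, err := client.CoreV1().Services("loadbalancers-9263").Create(
		context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service", created.Name)
}
```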
Nov 25 16:05:09.895: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:05:09.895: INFO: affinity-lb-bfwsf bootstrap-e2e-minion-group-6gq3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC }] Nov 25 16:05:09.895: INFO: affinity-lb-gzd69 bootstrap-e2e-minion-group-sp52 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:04 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:04 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC }] Nov 25 16:05:09.895: INFO: affinity-lb-ltd4r bootstrap-e2e-minion-group-9cl6 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:05:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:57 +0000 UTC }] Nov 25 16:05:09.895: INFO: Nov 25 16:05:10.882: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:05:10.987: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:05:10.988: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:05:11.096: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:05:11.277: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container kube-apiserver ready: true, restart count 2 Nov 25 16:05:11.277: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container etcd-container ready: true, restart count 2 Nov 25 16:05:11.277: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 16:05:11.277: INFO: 
l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container l7-lb-controller ready: true, restart count 5 Nov 25 16:05:11.277: INFO: metadata-proxy-v0.1-7q9zt started at 2022-11-25 15:55:39 +0000 UTC (0+2 container statuses recorded) Nov 25 16:05:11.277: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:05:11.277: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:05:11.277: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container kube-controller-manager ready: true, restart count 6 Nov 25 16:05:11.277: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container kube-scheduler ready: true, restart count 3 Nov 25 16:05:11.277: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 15:54:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container etcd-container ready: true, restart count 2 Nov 25 16:05:11.277: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 15:55:12 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:11.277: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 16:05:11.718: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 16:05:11.718: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:05:11.795: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 4832 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 16:00:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2022-11-25 16:05:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:09 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:09 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:09 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:05:09 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-1436^eeb7b306-6cda-11ed-bacc-ee4d4a7a69be,DevicePath:,},},Config:nil,},} Nov 25 16:05:11.796: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:05:11.848: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:05:12.067: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6gq3: error trying to reach service: EOF Nov 25 16:05:12.067: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:05:12.138: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 4781 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6","csi-mock-csi-mock-volumes-5257":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 16:04:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:05:08 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:05:12.139: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:05:12.203: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:05:12.572: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-pghkd started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 16:05:12.572: INFO: csi-mockplugin-attacher-0 started at 2022-11-25 16:04:48 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container csi-attacher ready: true, restart count 0 Nov 25 16:05:12.572: INFO: pod-262f38e7-b42e-4bd9-bd33-c3bf07a7d4c0 started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:05:12.572: INFO: test-hostpath-type-xcppn started at <nil> (0+0 container statuses recorded) Nov 25 16:05:12.572: INFO: netserver-1 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container webserver ready: true, restart count 3 Nov 25 16:05:12.572: INFO: coredns-6d97d5ddb-jlmlv started at 2022-11-25 15:55:57 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container coredns ready: false, restart count 5 Nov 25 16:05:12.572: INFO: test-hostpath-type-vftrr started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:05:12.572: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-zntjq started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:05:12.572: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-vlw6f started at 2022-11-25 15:57:26 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 16:05:12.572: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 16:05:12.572: INFO: Container local-io-client ready: false, restart count 0 Nov 25 16:05:12.572: INFO: pod-subpath-test-preprovisionedpv-5vln started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Init container 
init-volume-preprovisionedpv-5vln ready: true, restart count 0 Nov 25 16:05:12.572: INFO: Container test-container-subpath-preprovisionedpv-5vln ready: false, restart count 0 Nov 25 16:05:12.572: INFO: kube-proxy-bootstrap-e2e-minion-group-9cl6 started at 2022-11-25 15:55:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container kube-proxy ready: false, restart count 5 Nov 25 16:05:12.572: INFO: metadata-proxy-v0.1-lm6hb started at 2022-11-25 15:55:37 +0000 UTC (0+2 container statuses recorded) Nov 25 16:05:12.572: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:05:12.572: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:05:12.572: INFO: pod-879bca5a-da87-481c-8825-3925192f7528 started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:05:12.572: INFO: csi-mockplugin-0 started at 2022-11-25 16:04:48 +0000 UTC (0+3 container statuses recorded) Nov 25 16:05:12.572: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 16:05:12.572: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 16:05:12.572: INFO: Container mock ready: true, restart count 0 Nov 25 16:05:12.572: INFO: affinity-lb-ltd4r started at 2022-11-25 16:04:57 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container affinity-lb ready: false, restart count 1 Nov 25 16:05:12.572: INFO: var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f started at 2022-11-25 15:57:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container dapi-container ready: false, restart count 0 Nov 25 16:05:12.572: INFO: konnectivity-agent-gwjl2 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container konnectivity-agent ready: true, restart count 4 Nov 25 16:05:12.572: INFO: pod-1a10515f-adf0-4305-bed9-0275ef41a59c started at 2022-11-25 15:57:18 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:05:12.572: INFO: csi-mockplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+4 container statuses recorded) Nov 25 16:05:12.572: INFO: Container busybox ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container csi-provisioner ready: false, restart count 2 Nov 25 16:05:12.572: INFO: Container driver-registrar ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container mock ready: true, restart count 1 Nov 25 16:05:12.572: INFO: affinity-lb-esipp-transition-687ks started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container affinity-lb-esipp-transition ready: false, restart count 1 Nov 25 16:05:12.572: INFO: test-hostpath-type-fj9gg started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 25 16:05:12.572: INFO: external-local-nodeport-jpcl6 started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container netexec ready: true, restart count 4 Nov 25 16:05:12.572: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-sztmx started at 2022-11-25 15:59:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:05:12.572: INFO: pod-subpath-test-preprovisionedpv-n5fx started at 2022-11-25 15:59:50 
+0000 UTC (1+2 container statuses recorded) Nov 25 16:05:12.572: INFO: Init container init-volume-preprovisionedpv-n5fx ready: true, restart count 3 Nov 25 16:05:12.572: INFO: Container test-container-subpath-preprovisionedpv-n5fx ready: false, restart count 3 Nov 25 16:05:12.572: INFO: Container test-container-volume-preprovisionedpv-n5fx ready: false, restart count 2 Nov 25 16:05:12.572: INFO: hostexec-bootstrap-e2e-minion-group-9cl6-kp5tg started at 2022-11-25 15:57:16 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:05:12.572: INFO: test-hostpath-type-6ljpr started at 2022-11-25 15:57:35 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:12.572: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:05:12.572: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:59:36 +0000 UTC (0+7 container statuses recorded) Nov 25 16:05:12.572: INFO: Container csi-attacher ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container csi-provisioner ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container csi-resizer ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container hostpath ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container liveness-probe ready: true, restart count 1 Nov 25 16:05:12.572: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 25 16:05:14.905: INFO: Latency metrics for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:05:14.905: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:05:14.981: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 4460 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:05:14.982: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:05:15.063: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:05:15.235: INFO: external-provisioner-grqqx started at 2022-11-25 16:04:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container nfs-provisioner ready: false, restart count 1 Nov 25 16:05:15.235: INFO: affinity-lb-gzd69 started at 2022-11-25 16:04:57 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container affinity-lb ready: true, restart count 1 Nov 25 16:05:15.235: INFO: csi-mockplugin-0 started at 2022-11-25 16:04:58 +0000 UTC (0+4 container statuses recorded) Nov 25 16:05:15.235: INFO: Container busybox ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container csi-provisioner ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container driver-registrar ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container mock ready: true, restart count 0 Nov 25 16:05:15.235: INFO: pod-subpath-test-preprovisionedpv-4xmm started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Init container init-volume-preprovisionedpv-4xmm ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container test-container-subpath-preprovisionedpv-4xmm ready: false, restart count 0 Nov 25 16:05:15.235: INFO: hostexec-bootstrap-e2e-minion-group-sp52-6q2d2 started at 2022-11-25 16:04:56 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container agnhost-container ready: false, restart count 1 Nov 25 16:05:15.235: INFO: pod-back-off-image started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container back-off ready: false, restart count 9 Nov 25 16:05:15.235: INFO: test-hostpath-type-7x7jc started at 2022-11-25 16:04:54 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 25 
16:05:15.235: INFO: metrics-server-v0.5.2-867b8754b9-xks4c started at 2022-11-25 15:56:07 +0000 UTC (0+2 container statuses recorded) Nov 25 16:05:15.235: INFO: Container metrics-server ready: false, restart count 5 Nov 25 16:05:15.235: INFO: Container metrics-server-nanny ready: false, restart count 6 Nov 25 16:05:15.235: INFO: hostexec-bootstrap-e2e-minion-group-sp52-qprs8 started at 2022-11-25 15:57:14 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container agnhost-container ready: false, restart count 4 Nov 25 16:05:15.235: INFO: pod-1aeaf794-dfc5-4bf5-a5d6-a74390afdcef started at 2022-11-25 15:59:34 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:05:15.235: INFO: hostexec-bootstrap-e2e-minion-group-sp52-t7nms started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container agnhost-container ready: true, restart count 1 Nov 25 16:05:15.235: INFO: netserver-2 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container webserver ready: true, restart count 3 Nov 25 16:05:15.235: INFO: konnectivity-agent-qc7wc started at 2022-11-25 15:55:55 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 25 16:05:15.235: INFO: hostexec-bootstrap-e2e-minion-group-sp52-dthgj started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container agnhost-container ready: true, restart count 3 Nov 25 16:05:15.235: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 16:05:15.235: INFO: Container csi-attacher ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container csi-provisioner ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container csi-resizer ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container hostpath ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container liveness-probe ready: false, restart count 4 Nov 25 16:05:15.235: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 25 16:05:15.235: INFO: ss-0 started at 2022-11-25 16:04:52 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container webserver ready: false, restart count 1 Nov 25 16:05:15.235: INFO: affinity-lb-esipp-transition-rvb7l started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container affinity-lb-esipp-transition ready: true, restart count 0 Nov 25 16:05:15.235: INFO: kube-proxy-bootstrap-e2e-minion-group-sp52 started at 2022-11-25 15:55:42 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container kube-proxy ready: true, restart count 5 Nov 25 16:05:15.235: INFO: pod-subpath-test-preprovisionedpv-6n9v started at 2022-11-25 15:57:38 +0000 UTC (1+2 container statuses recorded) Nov 25 16:05:15.235: INFO: Init container init-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container test-container-subpath-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container test-container-volume-preprovisionedpv-6n9v ready: true, restart count 0 Nov 25 16:05:15.235: INFO: test-hostpath-type-tcwxq started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) 
Nov 25 16:05:15.235: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:05:15.235: INFO: var-expansion-07e93d59-a015-44f3-9fba-6604e8291733 started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container dapi-container ready: true, restart count 0 Nov 25 16:05:15.235: INFO: test-hostpath-type-vgxcp started at 2022-11-25 15:57:29 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container host-path-testing ready: false, restart count 0 Nov 25 16:05:15.235: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:38 +0000 UTC (0+7 container statuses recorded) Nov 25 16:05:15.235: INFO: Container csi-attacher ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container csi-provisioner ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container csi-resizer ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container csi-snapshotter ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container hostpath ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container liveness-probe ready: false, restart count 3 Nov 25 16:05:15.235: INFO: Container node-driver-registrar ready: false, restart count 3 Nov 25 16:05:15.235: INFO: pod-e1229de5-2e40-4cb2-b4e6-1393252d48bc started at 2022-11-25 16:00:02 +0000 UTC (0+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:05:15.235: INFO: metadata-proxy-v0.1-zsm52 started at 2022-11-25 15:55:43 +0000 UTC (0+2 container statuses recorded) Nov 25 16:05:15.235: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:05:15.235: INFO: pod-subpath-test-dynamicpv-mdz4 started at 2022-11-25 15:57:27 +0000 UTC (1+1 container statuses recorded) Nov 25 16:05:15.235: INFO: Init container init-volume-dynamicpv-mdz4 ready: true, restart count 0 Nov 25 16:05:15.235: INFO: Container test-container-subpath-dynamicpv-mdz4 ready: false, restart count 0 Nov 25 16:05:16.126: INFO: Latency metrics for node bootstrap-e2e-minion-group-sp52 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-9263" for this suite. 11/25/22 16:05:16.126
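The node dumps above come from the framework's failure-time debug helpers, which log each Node object (conditions, images, attached volumes) plus the pods the kubelet reports for that node. A minimal sketch of the same kind of dump with client-go is below; the function name, kubeconfig path handling, and output format are assumptions for illustration, not the framework's actual dump code.

// Minimal sketch (assumed names, not the e2e framework's debug helpers):
// list nodes and print their conditions, roughly the information logged
// above for bootstrap-e2e-minion-group-9cl6 and bootstrap-e2e-minion-group-sp52.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func dumpNodeConditions(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// e.g. "bootstrap-e2e-minion-group-9cl6 Ready=True (KubeletReady)"
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
	return nil
}

func main() {
	if err := dumpNodeConditions("/workspace/.kube/config"); err != nil {
		fmt.Println(err)
	}
}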
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sonly\sallow\saccess\sfrom\sservice\sloadbalancer\ssource\sranges\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012c5950) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:14:58.744 Nov 25 16:14:58.744: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/25/22 16:14:58.747 Nov 25 16:14:58.787: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:00.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:02.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:04.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:06.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:08.826: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:10.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:12.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:14.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:16.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:18.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:20.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:22.826: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:24.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:26.827: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:28.826: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:28.866: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:28.866: INFO: Unexpected error: <*errors.errorString | 0xc00017da30>: { s: "timed out waiting for the condition", } Nov 25 16:15:28.866: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012c5950) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 25 16:15:28.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:15:28.906 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
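The repeated "connection refused" lines above are the framework retrying namespace creation on a fixed interval while the apiserver at 35.197.125.133 is down; the final "timed out waiting for the condition" is the stringified timeout error from the wait helpers once the retry budget is exhausted. Below is a hedged sketch of that retry-until-timeout pattern; the helper name, interval, and timeout are assumptions, not the exact code behind framework.go:241.

// Sketch (assumed names/intervals): retry namespace creation until a
// timeout, tolerating transient apiserver errors such as
// "connect: connection refused". wait.ErrWaitTimeout stringifies to
// "timed out waiting for the condition", the message seen in the log.
package main

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createNamespaceWithRetry(cs kubernetes.Interface, baseName string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(),
			&v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"}},
			metav1.CreateOptions{})
		if err != nil {
			// Transient failure (e.g. apiserver unreachable); keep polling.
			return false, nil
		}
		created = ns
		return true, nil
	})
	return created, err
}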
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\shttp\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013105a0) test/e2e/framework/framework.go:241 +0x96f (from junit_01.xml)
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:15:45.697 Nov 25 16:15:45.697: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/25/22 16:15:45.699 Nov 25 16:15:45.739: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:47.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:49.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:51.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:53.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:55.780: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:57.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:59.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:01.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:03.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:05.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:07.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:09.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:11.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:13.778: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:15.779: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:15.819: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:16:15.819: INFO: Unexpected error: <*errors.errorString | 0xc00017da10>: { s: "timed out waiting for the condition", } Nov 25 16:16:15.819: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013105a0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 25 16:16:15.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:16:15.859 [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\supdate\snodePort\:\sudp\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e525a0) test/e2e/framework/framework.go:241 +0x96f (from junit_01.xml)
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:01:11.296 Nov 25 16:01:11.296: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename nettest 11/25/22 16:01:11.298 Nov 25 16:01:11.337: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:13.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:15.378: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:17.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:19.376: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:21.378: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:23.376: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:25.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:27.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:29.376: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:31.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:33.380: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:35.376: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:37.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:39.377: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:41.376: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:41.415: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:01:41.415: INFO: Unexpected error: <*errors.errorString | 0xc000221c50>: { s: "timed out waiting for the condition", } Nov 25 16:01:41.416: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e525a0) 
test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 25 16:01:41.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:01:41.455 [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sGCE\s\[Slow\]\sshould\sbe\sable\sto\screate\sand\stear\sdown\sa\sstandard\-tier\sload\sbalancer\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00132e0f0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func21.2() test/e2e/network/network_tiers.go:57 +0x133 (from junit_01.xml)
[BeforeEach] [sig-network] Services GCE [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:15:22.628 Nov 25 16:15:22.628: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services 11/25/22 16:15:22.63 Nov 25 16:15:22.670: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:24.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:26.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:28.711: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:30.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:32.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:34.711: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:36.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:38.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:40.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:42.711: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:44.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:46.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:48.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:50.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:52.710: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:52.750: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:15:52.750: INFO: Unexpected error: <*errors.errorString | 0xc000115cd0>: { s: "timed out waiting for the condition", } Nov 25 16:15:52.750: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00132e0f0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] Services GCE [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:15:52.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] Services GCE [Slow] test/e2e/network/network_tiers.go:55 [DeferCleanup (Each)] [sig-network] Services GCE [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:15:52.791 [DeferCleanup (Each)] [sig-network] Services GCE [Slow] tear down framework | framework.go:193
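The secondary [PANICKED] failure recorded for this test (network_tiers.go:57) is the usual consequence of BeforeEach never completing: an AfterEach then dereferences state that was never initialized and hits a nil pointer. The snippet below is only a hypothetical illustration of that failure mode and how a nil guard avoids the secondary panic; the variable name, namespace, and cleanup body are assumptions, not the code at network_tiers.go:57.

// Hypothetical illustration (assumed names): when BeforeEach fails before
// the clientset is initialized, a cleanup that dereferences it panics with
// "invalid memory address or nil pointer dereference". A nil guard turns
// that secondary panic into a no-op so only the real failure is reported.
package network

import (
	"context"

	"github.com/onsi/ginkgo/v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var cs kubernetes.Interface // set in BeforeEach; nil if setup never ran

var _ = ginkgo.AfterEach(func() {
	if cs == nil {
		return // setup never completed; nothing to tear down
	}
	// Placeholder cleanup; service name and namespace are illustrative only.
	_ = cs.CoreV1().Services("services-placeholder").Delete(
		context.TODO(), "net-tiers-svc", metav1.DeleteOptions{})
})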
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\scap\sback\-off\sat\sMaxContainerBackOff\s\[Slow\]\[NodeConformance\]$'
test/e2e/common/node/pods.go:129 k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:129 +0x225 k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 +0x4c7 (from junit_01.xml)
[BeforeEach] [sig-node] Pods set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:05:25.077 Nov 25 16:05:25.077: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename pods 11/25/22 16:05:25.082 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:05:25.285 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:05:25.381 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:717 Nov 25 16:05:25.686: INFO: Waiting up to 5m0s for pod "back-off-cap" in namespace "pods-2946" to be "running and ready" Nov 25 16:05:25.788: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 101.189572ms Nov 25 16:05:25.788: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:27.843: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156535854s Nov 25 16:05:27.843: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:29.852: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165655895s Nov 25 16:05:29.852: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:31.850: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163104047s Nov 25 16:05:31.850: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:33.841: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154028839s Nov 25 16:05:33.841: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:35.893: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206676288s Nov 25 16:05:35.893: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:37.850: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 12.163093769s Nov 25 16:05:37.850: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:39.840: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 14.153743912s Nov 25 16:05:39.840: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:41.887: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 16.200330173s Nov 25 16:05:41.887: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:43.844: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 18.15706143s Nov 25 16:05:43.844: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:45.858: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 20.171679345s Nov 25 16:05:45.858: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:47.838: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.151001817s Nov 25 16:05:47.838: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:49.860: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 24.173142787s Nov 25 16:05:49.860: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:51.870: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 26.18378819s Nov 25 16:05:51.870: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:53.839: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 28.152565845s Nov 25 16:05:53.839: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:55.853: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 30.166056187s Nov 25 16:05:55.853: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:57.836: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 32.149300769s Nov 25 16:05:57.836: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:05:59.836: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 34.14933775s Nov 25 16:05:59.836: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:01.922: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 36.235429236s Nov 25 16:06:01.922: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:03.861: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 38.174400725s Nov 25 16:06:03.861: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:05.890: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 40.203341087s Nov 25 16:06:05.890: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:07.870: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 42.183411004s Nov 25 16:06:07.870: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:09.875: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 44.188042572s Nov 25 16:06:09.875: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:11.884: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 46.197759463s Nov 25 16:06:11.884: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:13.857: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 48.170831557s Nov 25 16:06:13.857: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:15.887: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 50.200283633s Nov 25 16:06:15.887: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:17.841: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.154228676s Nov 25 16:06:17.841: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:19.846: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 54.159819518s Nov 25 16:06:19.846: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:21.871: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 56.184206781s Nov 25 16:06:21.871: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:23.849: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 58.16204405s Nov 25 16:06:23.849: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:25.857: INFO: Pod "back-off-cap": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.170541241s Nov 25 16:06:25.857: INFO: The phase of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Nov 25 16:06:27.884: INFO: Pod "back-off-cap": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.197911499s Nov 25 16:06:27.884: INFO: The phase of Pod back-off-cap is Running (Ready = true) Nov 25 16:06:27.884: INFO: Pod "back-off-cap" satisfied condition "running and ready" ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m0.402s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m0s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 1237 [sleep, 5 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m20.404s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m20.002s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 1237 [sleep, 5 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 5m40.406s) 
test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 5m40.005s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 1237 [sleep, 5 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 10m40.44s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 10m40.039s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 1237 [sleep, 10 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 11m0.442s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 11m0.041s) test/e2e/common/node/pods.go:717 Spec Goroutine goroutine 1237 [sleep, 11 minutes] time.Sleep(0x8bb2c97000) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:737 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ STEP: getting restart delay when capped 11/25/22 16:16:27.96 ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 11m20.444s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 11m20.043s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 17.56s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 11m40.446s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 11m40.045s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 37.563s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc003914000, 0xc000d78d00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002f5cb80, 0xc000d78d00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0037d4000?}, 0xc000d78d00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0037d4000, 0xc000d78d00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc003bcf170?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc005105260, 0xc000d78c00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004dcd000, 0xc000d78b00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc000d78b00, {0x7fad100, 0xc004dcd000}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc005105290, 0xc000d78b00, {0x7fb250323108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc005105290, 0xc000d78b00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc000d78900, {0x7fe0bc8, 0xc0000820e0}, 0x0?) 
vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc000d78900, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc004d6c240, {0x7fe0bc8, 0xc0000820e0}, {0x75d2035, 0xc}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:128 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 12m0.449s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 12m0.048s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 57.565s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 12m20.451s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 12m20.049s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 1m17.567s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 
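The repeated progress reports above trace what this step is doing: after the pod went Ready at 16:06:27, goroutine 1237 first sat in a single time.Sleep(0x8bb2c97000) at pods.go:737 (600,000,000,000 ns, i.e. 10 minutes, presumably to let the container's crash-loop back-off climb to its cap), and since the "getting restart delay when capped" step began at 16:16:27 it has been looping in getRestartDelay (pods.go:127-128), alternating a one-second time.Sleep(0x3b9aca00) with a Pods().Get round-trip while it waits for the next restart. The helper's source is not included in this log, so the following is only a minimal sketch, assuming client-go and hypothetical names, of that kind of poll: it watches ContainerStatuses[0] until the restart count rises and derives the back-off delay from LastTerminationState.Terminated.FinishedAt and State.Running.StartedAt, the same pair of timestamps the getRestartDelay lines below print.

```go
// A minimal sketch (hypothetical names, assumed 1s poll) of the kind of
// restart-delay poll the stack traces above are sitting in; it is not the
// e2e suite's actual getRestartDelay helper.
package backoffprobe

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNextRestartDelay polls the pod roughly once per second until its
// first container's restart count rises above prevRestarts, then returns how
// long the container sat in back-off: the gap between the previous run's
// FinishedAt and the new run's StartedAt (the two timestamps the log prints).
func waitForNextRestartDelay(ctx context.Context, c kubernetes.Interface, ns, name string, prevRestarts int32, timeout time.Duration) (time.Duration, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		time.Sleep(time.Second) // the log's time.Sleep(0x3b9aca00) is exactly 1s

		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return 0, err
		}
		if len(pod.Status.ContainerStatuses) == 0 {
			continue
		}
		st := pod.Status.ContainerStatuses[0]
		if st.RestartCount <= prevRestarts ||
			st.State.Running == nil ||
			st.LastTerminationState.Terminated == nil {
			continue // not restarted again yet, keep polling
		}
		finished := st.LastTerminationState.Terminated.FinishedAt.Time
		restarted := st.State.Running.StartedAt.Time
		fmt.Printf("restartCount=%d finishedAt=%s restartedAt=%s (%s)\n",
			st.RestartCount, finished.UTC(), restarted.UTC(),
			restarted.Sub(finished).Round(time.Second))
		return restarted.Sub(finished), nil
	}
	return 0, fmt.Errorf("pod %s/%s did not restart again within %v", ns, name, timeout)
}
```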
------------------------------ Nov 25 16:17:57.880: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-11-25 16:12:49 +0000 UTC restartedAt=2022-11-25 16:17:56 +0000 UTC (5m7s) ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 12m40.453s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 12m40.051s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 1m37.569s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc003914000, 0xc000c86300) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002f5cb80, 0xc000c86300, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0037d4000?}, 0xc000c86300?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0037d4000, 0xc000c86300) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc001eb1440?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc005105260, 0xc000c86200) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004dcd000, 0xc000c86100) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc000c86100, {0x7fad100, 0xc004dcd000}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc005105290, 0xc000c86100, {0x7fb2503235b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc005105290, 0xc000c86100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0014f5f00, {0x7fe0bc8, 0xc0000820e0}, 0x0?) 
vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0014f5f00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc004d6c240, {0x7fe0bc8, 0xc0000820e0}, {0x75d2035, 0xc}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:128 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 17m40.484s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 17m40.082s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 6m37.6s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc})
test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:23:13.397: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-11-25 16:18:01 +0000 UTC restartedAt=2022-11-25 16:23:12 +0000 UTC (5m11s) ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 18m0.486s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 18m0.085s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 6m57.602s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 18m20.489s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 18m20.087s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 7m17.605s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 18m40.491s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 18m40.089s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step 
Runtime: 7m37.607s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 20m20.502s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 20m20.1s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 9m17.618s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 20m40.504s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 20m40.102s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 9m37.62s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 21m0.506s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 21m0.104s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 9m57.622s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 21m20.508s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 21m20.106s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 10m17.624s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 21m40.51s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 21m40.108s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 10m37.626s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 22m0.512s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 22m0.11s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 10m57.628s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 22m20.514s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 22m20.112s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 11m17.63s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 22m40.516s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 22m40.115s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 11m37.632s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 23m0.519s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 23m0.117s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay when capped (Step Runtime: 11m57.635s) test/e2e/common/node/pods.go:740 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:746 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 25 16:28:33.006: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-11-25 16:23:17 +0000 UTC restartedAt=2022-11-25 16:28:31 +0000 UTC (5m14s) STEP: getting restart delay after a capped delay 11/25/22 16:28:33.006 ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 23m20.521s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 23m20.119s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 12.592s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 23m40.522s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 23m40.121s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 32.593s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 24m0.525s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 24m0.123s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 52.596s) 
test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 24m20.526s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 24m20.125s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 1m12.597s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [sleep] time.Sleep(0x3b9aca00) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:127 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 24m40.529s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 24m40.127s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 1m32.599s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc003914000, 0xc003ae3f00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002f5cb80, 0xc003ae3f00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0037d4000?}, 0xc003ae3f00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0037d4000, 0xc003ae3f00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc004f07800?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc005105260, 0xc003ae3e00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004dcd000, 0xc003ae3d00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc003ae3d00, {0x7fad100, 0xc004dcd000}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc005105290, 0xc003ae3d00, {0x7fb2503235b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc005105290, 0xc003ae3d00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc004d6c240, {0x7fe0bc8, 0xc0000820e0}, {0x75d2035, 0xc}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:128 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 25m0.531s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 25m0.129s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 1m52.602s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc003914000, 0xc003ae3f00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002f5cb80, 0xc003ae3f00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0037d4000?}, 0xc003ae3f00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0037d4000, 0xc003ae3f00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc004f07800?) 
/usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc005105260, 0xc003ae3e00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004dcd000, 0xc003ae3d00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc003ae3d00, {0x7fad100, 0xc004dcd000}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc005105290, 0xc003ae3d00, {0x7fb2503235b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc005105290, 0xc003ae3d00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc004d6c240, {0x7fe0bc8, 0xc0000820e0}, {0x75d2035, 0xc}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82 > k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc}) test/e2e/common/node/pods.go:128 > k8s.io/kubernetes/test/e2e/common/node.glob..func15.10() test/e2e/common/node/pods.go:761 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #1 Automatically polling progress: [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] (Spec Runtime: 25m20.537s) test/e2e/common/node/pods.go:717 In [It] (Node Runtime: 25m20.135s) test/e2e/common/node/pods.go:717 At [By Step] getting restart delay after a capped delay (Step Runtime: 2m12.608s) test/e2e/common/node/pods.go:760 Spec Goroutine goroutine 1237 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc003914000, 0xc003ae3f00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc002f5cb80, 0xc003ae3f00, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc0037d4000?}, 0xc003ae3f00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc0037d4000, 0xc003ae3f00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc004f07800?) 
/usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc005105260, 0xc003ae3e00)
  vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004dcd000, 0xc003ae3d00)
  vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc003ae3d00, {0x7fad100, 0xc004dcd000}, {0x74d54e0?, 0x1?, 0x0?})
  /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc005105290, 0xc003ae3d00, {0x7fb2503235b8?, 0x100?, 0x0?})
  /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc005105290, 0xc003ae3d00)
  /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
  /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0}, 0x0?)
  vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc003ae3b00, {0x7fe0bc8, 0xc0000820e0})
  vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Get(0xc004d6c240, {0x7fe0bc8, 0xc0000820e0}, {0x75d2035, 0xc}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
  vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:82
> k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc})
  test/e2e/common/node/pods.go:128
> k8s.io/kubernetes/test/e2e/common/node.glob..func15.10()
  test/e2e/common/node/pods.go:746
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fe6000})
  vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
  vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 25 16:31:00.700: INFO: Unexpected error: getting pod back-off-cap:
    <*url.Error | 0xc0049a0630>: {
        Op: "Get",
        URL: "https://35.197.125.133/api/v1/namespaces/pods-2946/pods/back-off-cap",
        Err: <http2.StreamError>{
            StreamID: 1407,
            Code: 2,
            Cause: <*errors.errorString | 0xc0002877a0>{
                s: "received from peer",
            },
        },
    }
Nov 25 16:31:00.701: FAIL: getting pod back-off-cap: Get "https://35.197.125.133/api/v1/namespaces/pods-2946/pods/back-off-cap": stream error: stream ID 1407; INTERNAL_ERROR; received from peer

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc004ea6258, {0x75d2035, 0xc}, {0x75d2035, 0xc})
  test/e2e/common/node/pods.go:129 +0x225
k8s.io/kubernetes/test/e2e/common/node.glob..func15.10()
  test/e2e/common/node/pods.go:761 +0x4c7
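The frames above (time.Sleep at test/e2e/common/node/pods.go:127, then the client-go Pods().Get at pods.go:128) show where the spec sat when the apiserver connection returned the INTERNAL_ERROR stream error. The following is a minimal sketch of that polling pattern, assuming a standard client-go clientset; the function name pollRestartDelay, the one-second interval, and the clientset wiring are illustrative, not the actual helper in pods.go.

package podbackoff

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// pollRestartDelay mirrors the pattern the stack frames point at: sleep one
// second, Get the pod, and derive the observed back-off delay from the first
// container's status once it has restarted again.
func pollRestartDelay(ctx context.Context, c kubernetes.Interface, ns, podName string, prevRestarts int32) (time.Duration, error) {
	for {
		time.Sleep(1 * time.Second) // the [sleep] state in the progress reports (0x3b9aca00 ns)

		pod, err := c.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			// This run died here with: stream error: stream ID 1407; INTERNAL_ERROR; received from peer.
			return 0, fmt.Errorf("getting pod %s: %w", podName, err)
		}
		if len(pod.Status.ContainerStatuses) == 0 {
			continue
		}
		st := pod.Status.ContainerStatuses[0]
		if st.RestartCount <= prevRestarts || st.State.Running == nil || st.LastTerminationState.Terminated == nil {
			continue // container has not come back up yet
		}
		finished := st.LastTerminationState.Terminated.FinishedAt.Time
		restarted := st.State.Running.StartedAt.Time
		// In this log: restartedAt 16:28:31 - finishedAt 16:23:17 = 5m14s, i.e. the
		// ~300s kubelet back-off cap (MaxContainerBackOff) plus pod sync latency.
		return restarted.Sub(finished), nil
	}
}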
[AfterEach] [sig-node] Pods
  test/e2e/framework/node/init/init.go:32
Nov 25 16:31:00.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Pods
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Pods
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/25/22 16:31:12.486
STEP: Collecting events from namespace "pods-2946". 11/25/22 16:31:12.486
STEP: Found 8 events. 11/25/22 16:31:12.528
Nov 25 16:31:12.528: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for back-off-cap: { } Scheduled: Successfully assigned pods-2946/back-off-cap to bootstrap-e2e-minion-group-sp52
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:20 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-fj95z" : failed to sync configmap cache: timed out waiting for the condition
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:21 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:21 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} Created: Created container back-off-cap
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:23 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} Started: Started container back-off-cap
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:29 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} Killing: Stopping container back-off-cap
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:34 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 25 16:31:12.528: INFO: At 2022-11-25 16:06:35 +0000 UTC - event for back-off-cap: {kubelet bootstrap-e2e-minion-group-sp52} BackOff: Back-off restarting failed container back-off-cap in pod back-off-cap_pods-2946(48b01b1d-31bb-41db-a6f1-bb3f698f004b)
Nov 25 16:31:14.594: INFO: POD           NODE                             PHASE    GRACE  CONDITIONS
Nov 25 16:31:14.594: INFO: back-off-cap  bootstrap-e2e-minion-group-sp52  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:28:37 +0000 UTC ContainersNotReady containers with unready status: [back-off-cap]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:28:37 +0000 UTC ContainersNotReady containers with unready status: [back-off-cap]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:06:17 +0000 UTC }]
Nov 25 16:31:14.594: INFO:
Nov 25 16:31:14.666: INFO: Unable to fetch pods-2946/back-off-cap/back-off-cap logs: an error on the server ("unknown") has prevented the request from succeeding (get pods back-off-cap)
Nov 25 16:31:14.714: INFO: Logging node info for node bootstrap-e2e-master
Nov 25 16:31:14.756: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 9486 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:28:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:31:14.757: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:31:14.827: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:31:14.870: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 16:31:14.870: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:31:14.913: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 9630 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-9587":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 16:07:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:26:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:31:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:26:24 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:39 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:39 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:39 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:26:39 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 16:31:14.914: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:31:14.958: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:31:15.010: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6gq3: error trying to reach service: No agent available Nov 25 16:31:15.010: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:31:15.053: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 9554 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-4283":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 16:04:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-25 16:26:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:29:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:26:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:28:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:31:15.053: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:31:15.097: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:31:15.140: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-9cl6: error trying to reach service: No agent available Nov 25 16:31:15.140: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:31:15.183: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 9635 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-4057":"bootstrap-e2e-minion-group-sp52","csi-mock-csi-mock-volumes-9286":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } 
{kube-controller-manager Update v1 2022-11-25 16:06:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:26:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:31:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 
UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:26:21 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:34 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:34 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:26:34 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:26:34 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:31:15.183: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:31:15.227: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:31:15.270: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sp52: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 STEP: Destroying namespace "pods-2946" for this suite. 11/25/22 16:31:15.271
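Each "Logging pods the kubelet thinks is on node ..." step above fails with "error trying to reach service: No agent available", meaning the API server has no Konnectivity tunnel to the node at that point. For reference, a minimal client-go sketch of that kind of query (GET /api/v1/nodes/<node>/proxy/pods) follows; the helper name kubeletPods is invented for this example and is not the e2e framework's own function.

// Sketch (illustration only): ask the API server to proxy a pod-list request
// to a node's kubelet. When the API server cannot tunnel to the node (no
// Konnectivity agent), the request fails with "error trying to reach service:
// No agent available", as seen in the log above.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// kubeletPods issues GET /api/v1/nodes/<node>/proxy/pods, which the API
// server forwards to the kubelet on that node.
func kubeletPods(ctx context.Context, cs kubernetes.Interface, nodeName string) (*v1.PodList, error) {
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(nodeName).
		SubResource("proxy").
		Suffix("pods").
		Do(ctx).Raw()
	if err != nil {
		return nil, err
	}
	pods := &v1.PodList{}
	if err := json.Unmarshal(raw, pods); err != nil {
		return nil, err
	}
	return pods, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := kubeletPods(context.Background(), cs, "bootstrap-e2e-minion-group-9cl6")
	if err != nil {
		// e.g. "error trying to reach service: No agent available"
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("kubelet reports %d pods\n", len(pods.Items))
}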
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\shave\stheir\sauto\-restart\sback\-off\stimer\sreset\son\simage\supdate\s\[Slow\]\[NodeConformance\]$'
test/e2e/common/node/pods.go:129 k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc001a003f0, {0x75f4694, 0x12}, {0x75c2b3f, 0x8}) test/e2e/common/node/pods.go:129 +0x225 k8s.io/kubernetes/test/e2e/common/node.glob..func15.9() test/e2e/common/node/pods.go:706 +0x41f There were additional failures detected after the initial failure: [FAILED] Nov 25 16:01:00.337: failed to list events in namespace "pods-2631": Get "https://35.197.125.133/api/v1/namespaces/pods-2631/events": dial tcp 35.197.125.133:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 16:01:00.377: Couldn't delete ns: "pods-2631": Delete "https://35.197.125.133/api/v1/namespaces/pods-2631": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/pods-2631", Err:(*net.OpError)(0xc001995040)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-node] Pods set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:57:14.738 Nov 25 15:57:14.738: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename pods 11/25/22 15:57:14.74 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:57:14.87 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:57:14.954 [BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] test/e2e/common/node/pods.go:676 Nov 25 15:57:15.093: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-2631" to be "running and ready" Nov 25 15:57:15.150: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 57.208469ms Nov 25 15:57:15.150: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:57:17.193: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100262691s Nov 25 15:57:17.193: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:57:19.192: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098905373s Nov 25 15:57:19.192: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:57:21.204: INFO: Pod "pod-back-off-image": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110868903s Nov 25 15:57:21.204: INFO: The phase of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 25 15:57:23.195: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=true. Elapsed: 8.101616942s Nov 25 15:57:23.195: INFO: The phase of Pod pod-back-off-image is Running (Ready = true) Nov 25 15:57:23.195: INFO: Pod "pod-back-off-image" satisfied condition "running and ready" STEP: getting restart delay-0 11/25/22 15:58:23.237 Nov 25 15:58:27.496: INFO: getRestartDelay: restartCount = 3, finishedAt=2022-11-25 15:57:54 +0000 UTC restartedAt=2022-11-25 15:58:20 +0000 UTC (26s) STEP: getting restart delay-1 11/25/22 15:58:27.496 Nov 25 15:59:09.329: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-11-25 15:58:25 +0000 UTC restartedAt=2022-11-25 15:59:07 +0000 UTC (42s) STEP: getting restart delay-2 11/25/22 15:59:09.329 Nov 25 16:00:40.210: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-11-25 15:59:12 +0000 UTC restartedAt=2022-11-25 16:00:38 +0000 UTC (1m26s) STEP: updating the image 11/25/22 16:00:40.21 Nov 25 16:00:40.807: INFO: Successfully updated pod "pod-back-off-image" Nov 25 16:00:50.807: INFO: Waiting up to 5m0s for pod "pod-back-off-image" in namespace "pods-2631" to be "running" Nov 25 16:00:50.849: INFO: Pod "pod-back-off-image": Phase="Running", Reason="", readiness=false. 
Elapsed: 41.845906ms Nov 25 16:00:50.849: INFO: Pod "pod-back-off-image" satisfied condition "running" STEP: get restart delay after image update 11/25/22 16:00:50.849 Nov 25 16:01:00.257: INFO: Unexpected error: getting pod pod-back-off-image: <*url.Error | 0xc001d3d410>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/pods-2631/pods/pod-back-off-image", Err: <*net.OpError | 0xc0037d2460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035616e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001228620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:00.257: FAIL: getting pod pod-back-off-image: Get "https://35.197.125.133/api/v1/namespaces/pods-2631/pods/pod-back-off-image": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.getRestartDelay(0xc001a003f0, {0x75f4694, 0x12}, {0x75c2b3f, 0x8}) test/e2e/common/node/pods.go:129 +0x225 k8s.io/kubernetes/test/e2e/common/node.glob..func15.9() test/e2e/common/node/pods.go:706 +0x41f [AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32 Nov 25 16:01:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:01:00.297 STEP: Collecting events from namespace "pods-2631". 11/25/22 16:01:00.298 Nov 25 16:01:00.337: INFO: Unexpected error: failed to list events in namespace "pods-2631": <*url.Error | 0xc001d3d890>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/pods-2631/events", Err: <*net.OpError | 0xc0037d2690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003561cb0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc001228bc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 16:01:00.337: FAIL: failed to list events in namespace "pods-2631": Get "https://35.197.125.133/api/v1/namespaces/pods-2631/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0014ee5c0, {0xc00127c4c0, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0011dd860}, {0xc00127c4c0, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0014ee650?, {0xc00127c4c0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002750e0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001499480?, 0xc0041cbfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000524a48?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001499480?, 0x29449fc?}, {0xae73300?, 0xc0041cbf80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193 STEP: Destroying namespace "pods-2631" for this suite. 
11/25/22 16:01:00.338 Nov 25 16:01:00.377: FAIL: Couldn't delete ns: "pods-2631": Delete "https://35.197.125.133/api/v1/namespaces/pods-2631": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/pods-2631", Err:(*net.OpError)(0xc001995040)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0002750e0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0014993e0?, 0xc0041e0fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0014993e0?, 0x0?}, {0xae73300?, 0x5?, 0xc000e5e6f0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
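The delays recorded above (26s, then 42s, then 1m26s) are the kubelet's crash-loop back-off, which roughly doubles on each restart; the test updates the image and expects the next delay to be short again, but the API server becomes unreachable before it can read it. A minimal sketch of how such a delay can be derived from a pod's container status follows; restartDelay is an invented name for this example, not the getRestartDelay helper at test/e2e/common/node/pods.go:129.

// Sketch (illustration only): compute the gap between a container's last
// termination and its current start, i.e. the back-off the kubelet applied
// before restarting it. This is the quantity the log reports as 26s/42s/1m26s.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restartDelay returns how long the named container waited between its last
// termination and its current start.
func restartDelay(pod *v1.Pod, container string) (time.Duration, error) {
	for _, st := range pod.Status.ContainerStatuses {
		if st.Name != container {
			continue
		}
		if st.State.Running == nil || st.LastTerminationState.Terminated == nil {
			return 0, fmt.Errorf("container %q has not been restarted yet", container)
		}
		finished := st.LastTerminationState.Terminated.FinishedAt.Time
		restarted := st.State.Running.StartedAt.Time
		return restarted.Sub(finished), nil
	}
	return 0, fmt.Errorf("container %q not found in pod status", container)
}

func main() {
	// Synthetic pod status for demonstration: terminated 42s before the
	// current start, matching the second delay observed in the log.
	now := time.Now()
	pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
		Name:                 "back-off",
		State:                v1.ContainerState{Running: &v1.ContainerStateRunning{StartedAt: metav1.NewTime(now)}},
		LastTerminationState: v1.ContainerState{Terminated: &v1.ContainerStateTerminated{FinishedAt: metav1.NewTime(now.Add(-42 * time.Second))}},
	}}}}
	d, _ := restartDelay(pod, "back-off")
	fmt.Println(d) // 42s
}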
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sVariable\sExpansion\sshould\sfail\ssubstituting\svalues\sin\sa\svolume\ssubpath\swith\sbackticks\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0002ec870) test/e2e/framework/framework.go:241 +0x96ffrom junit_01.xml
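The failure summarized above and detailed below reduces to the framework's BeforeEach timing out while waiting for the namespace's "default" service account, because the API server at 35.197.125.133 keeps refusing connections. A minimal sketch of that style of polling loop, using wait.PollImmediate from k8s.io/apimachinery, is shown here; the package and function names are invented for the example and this is not the actual code at framework.go:241.

// Sketch (illustration only): poll until the "default" service account exists
// in a namespace or a timeout elapses. On timeout, PollImmediate returns
// "timed out waiting for the condition", which is the error wrapped in the
// failure message below.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount retries until the "default" service account is
// visible in the namespace. Any error from the Get is retried, whether "not
// found" while the controller catches up or a transient "connection refused"
// while the API server is down; only the timeout becomes a failure.
func waitForDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface, ns string, timeout time.Duration) error {
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, getErr := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
		return getErr == nil, nil
	})
	if err != nil {
		// Yields: wait for service account "default" in namespace "...": timed out waiting for the condition
		return fmt.Errorf("wait for service account %q in namespace %q: %w", "default", ns, err)
	}
	return nil
}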
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:02:00.941 Nov 25 16:02:00.941: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename var-expansion 11/25/22 16:02:00.943 Nov 25 16:02:00.983: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:04:05.520: INFO: Unexpected error: <*fmt.wrapError | 0xc003d9c040>: { msg: "wait for service account \"default\" in namespace \"var-expansion-3965\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001fda30>{ s: "timed out waiting for the condition", }, } Nov 25 16:04:05.520: FAIL: wait for service account "default" in namespace "var-expansion-3965": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0002ec870) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 25 16:04:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:04:05.604 STEP: Collecting events from namespace "var-expansion-3965". 11/25/22 16:04:05.604 STEP: Found 0 events. 11/25/22 16:04:25.213 Nov 25 16:04:25.280: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:04:25.280: INFO: Nov 25 16:04:25.420: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:04:25.466: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:04:25.467: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:04:25.575: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:04:25.707: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 16:04:25.707: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.754: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 3768 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:57:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 16:04:25.755: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.811: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:25.894: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6gq3: error trying to reach service: No agent available Nov 25 16:04:25.894: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:25.951: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 3697 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:03:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} 
{<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2e633bf6-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8031^2d80c2fb-6cda-11ed-a8be-5a65049ea7a3,DevicePath:,},},Config:nil,},} Nov 25 16:04:25.952: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:26.010: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:26.082: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-9cl6: error trying to reach service: No agent available Nov 25 16:04:26.082: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:26.125: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 3722 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4245":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:03:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:04:26.126: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:26.209: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:26.268: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sp52: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-3965" for this suite. 11/25/22 16:04:26.268
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sVariable\sExpansion\sshould\ssucceed\sin\swriting\ssubpaths\sin\scontainer\s\[Slow\]\s\[Conformance\]$'
test/e2e/common/node/expansion.go:348 k8s.io/kubernetes/test/e2e/common/node.glob..func7.8() test/e2e/common/node/expansion.go:348 +0x4b2 from junit_01.xml
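For reference, the failing step recorded in the log below is an exec of a `touch` into the pod's subpath mount. A minimal manual re-check of that same write, assuming the namespace and pod from this run were still present, could look like the following sketch (namespace, pod, container, kubeconfig path, and file path are copied from the log entries that follow; this command is illustrative and is not part of the test output):

# Hypothetical manual reproduction of the failed subpath write; all names below
# are taken from the "var-expansion-5731" log entries in this failure.
kubectl --kubeconfig=/workspace/.kube/config exec -n var-expansion-5731 \
  var-expansion-07e93d59-a015-44f3-9fba-6604e8291733 -c dapi-container \
  -- /bin/sh -c 'touch /volume_mount/mypath/foo/test.log'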
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:04:26.701 Nov 25 16:04:26.702: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename var-expansion 11/25/22 16:04:26.704 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:04:44.83 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:04:44.935 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:297 STEP: creating the pod 11/25/22 16:04:45.159 STEP: waiting for pod running 11/25/22 16:04:45.288 Nov 25 16:04:45.288: INFO: Waiting up to 2m0s for pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733" in namespace "var-expansion-5731" to be "running" Nov 25 16:04:45.364: INFO: Pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733": Phase="Pending", Reason="", readiness=false. Elapsed: 75.687715ms Nov 25 16:04:47.406: INFO: Pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117877804s Nov 25 16:04:49.423: INFO: Pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134597801s Nov 25 16:04:51.421: INFO: Pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733": Phase="Running", Reason="", readiness=true. Elapsed: 6.133007726s Nov 25 16:04:51.421: INFO: Pod "var-expansion-07e93d59-a015-44f3-9fba-6604e8291733" satisfied condition "running" STEP: creating a file in subpath 11/25/22 16:04:51.421 Nov 25 16:04:51.480: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5731 PodName:var-expansion-07e93d59-a015-44f3-9fba-6604e8291733 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 16:04:51.480: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 16:04:51.482: INFO: ExecWithOptions: Clientset creation Nov 25 16:04:51.482: INFO: ExecWithOptions: execute(POST https://35.197.125.133/api/v1/namespaces/var-expansion-5731/pods/var-expansion-07e93d59-a015-44f3-9fba-6604e8291733/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) Nov 25 16:04:51.739: FAIL: expected to be able to write to subpath Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func7.8() test/e2e/common/node/expansion.go:348 +0x4b2 [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 25 16:04:51.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:04:51.825 STEP: Collecting events from namespace "var-expansion-5731". 11/25/22 16:04:51.825 STEP: Found 4 events. 
11/25/22 16:04:51.902 Nov 25 16:04:51.902: INFO: At 2022-11-25 16:04:45 +0000 UTC - event for var-expansion-07e93d59-a015-44f3-9fba-6604e8291733: {default-scheduler } Scheduled: Successfully assigned var-expansion-5731/var-expansion-07e93d59-a015-44f3-9fba-6604e8291733 to bootstrap-e2e-minion-group-sp52 Nov 25 16:04:51.902: INFO: At 2022-11-25 16:04:46 +0000 UTC - event for var-expansion-07e93d59-a015-44f3-9fba-6604e8291733: {kubelet bootstrap-e2e-minion-group-sp52} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 25 16:04:51.902: INFO: At 2022-11-25 16:04:46 +0000 UTC - event for var-expansion-07e93d59-a015-44f3-9fba-6604e8291733: {kubelet bootstrap-e2e-minion-group-sp52} Created: Created container dapi-container Nov 25 16:04:51.902: INFO: At 2022-11-25 16:04:46 +0000 UTC - event for var-expansion-07e93d59-a015-44f3-9fba-6604e8291733: {kubelet bootstrap-e2e-minion-group-sp52} Started: Started container dapi-container Nov 25 16:04:51.977: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 16:04:51.977: INFO: var-expansion-07e93d59-a015-44f3-9fba-6604e8291733 bootstrap-e2e-minion-group-sp52 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 16:04:45 +0000 UTC }] Nov 25 16:04:51.977: INFO: Nov 25 16:04:52.063: INFO: Unable to fetch var-expansion-5731/var-expansion-07e93d59-a015-44f3-9fba-6604e8291733/dapi-container logs: an error on the server ("unknown") has prevented the request from succeeding (get pods var-expansion-07e93d59-a015-44f3-9fba-6604e8291733) Nov 25 16:04:52.243: INFO: Logging node info for node bootstrap-e2e-master Nov 25 16:04:52.352: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 9cdf5595-019f-4ae3-b78d-0ecc5e3bede9 3445 0 2022-11-25 15:55:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:55 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:02:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.197.125.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ac2cf1840f1e13636aadd5beda3bd372,SystemUUID:ac2cf184-0f1e-1363-6aad-d5beda3bd372,BootID:561947ad-30a0-426d-bdea-6c654b08a7a1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:04:52.352: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 16:04:52.469: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 16:04:52.611: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 25 16:04:52.611: INFO: Logging node info for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:52.683: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6gq3 d9dd389c-0f83-4f5d-89ae-55a80abf1a2f 4249 0 2022-11-25 15:55:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:bootstrap-e2e-minion-group-6gq3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6gq3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5824":"bootstrap-e2e-minion-group-6gq3","csi-hostpath-provisioning-9498":"bootstrap-e2e-minion-group-6gq3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:57:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-6gq3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:41 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:03:27 +0000 UTC,LastTransitionTime:2022-11-25 15:55:39 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.38.169,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6gq3.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a65d19069fa2a7e527b61eb4bd24dd95,SystemUUID:a65d1906-9fa2-a7e5-27b6-1eb4bd24dd95,BootID:f0b831b9-bee5-4ef5-bc7f-65152df7ae5a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9498^de1c8e1f-6cd9-11ed-8076-62d24cb487be,DevicePath:,},},Config:nil,},} Nov 25 16:04:52.684: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:52.756: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6gq3 Nov 25 16:04:53.066: INFO: volume-snapshot-controller-0 started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container volume-snapshot-controller ready: false, restart count 5 Nov 25 16:04:53.066: INFO: pod-subpath-test-inlinevolume-x9j8 started at <nil> (0+0 container statuses recorded) Nov 25 16:04:53.066: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-txv7z started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:04:53.066: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-cvnzt started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:04:53.066: INFO: pod-subpath-test-inlinevolume-5xqv started at 2022-11-25 16:04:44 +0000 UTC (1+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Init container init-volume-inlinevolume-5xqv ready: true, restart count 0 Nov 25 16:04:53.066: INFO: Container test-container-subpath-inlinevolume-5xqv ready: false, restart count 0 Nov 25 16:04:53.066: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-gc2wc started at 2022-11-25 15:57:32 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container agnhost-container 
ready: true, restart count 1 Nov 25 16:04:53.066: INFO: coredns-6d97d5ddb-6vwlx started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container coredns ready: false, restart count 5 Nov 25 16:04:53.066: INFO: pod-d72e7e0e-8a03-47fa-99d8-581e9d66b5d0 started at 2022-11-25 15:57:38 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:04:53.066: INFO: local-io-client started at 2022-11-25 15:57:38 +0000 UTC (1+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Init container local-io-init ready: true, restart count 0 Nov 25 16:04:53.066: INFO: Container local-io-client ready: true, restart count 0 Nov 25 16:04:53.066: INFO: pod-e7b3e21a-23a0-4c71-a7b3-cb901c2491f8 started at 2022-11-25 15:57:37 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container write-pod ready: false, restart count 0 Nov 25 16:04:53.066: INFO: lb-internal-cg5nh started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container netexec ready: true, restart count 3 Nov 25 16:04:53.066: INFO: httpd started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container httpd ready: false, restart count 0 Nov 25 16:04:53.066: INFO: csi-hostpathplugin-0 started at 2022-11-25 16:04:48 +0000 UTC (0+7 container statuses recorded) Nov 25 16:04:53.066: INFO: Container csi-attacher ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container csi-provisioner ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container csi-resizer ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container csi-snapshotter ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container hostpath ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container liveness-probe ready: false, restart count 0 Nov 25 16:04:53.066: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 25 16:04:53.066: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-cvz5k started at 2022-11-25 16:04:46 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container agnhost-container ready: false, restart count 0 Nov 25 16:04:53.066: INFO: metadata-proxy-v0.1-ch4s9 started at 2022-11-25 15:55:38 +0000 UTC (0+2 container statuses recorded) Nov 25 16:04:53.066: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 16:04:53.066: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 16:04:53.066: INFO: kube-dns-autoscaler-5f6455f985-4h7dq started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.066: INFO: Container autoscaler ready: false, restart count 4 Nov 25 16:04:53.066: INFO: csi-hostpathplugin-0 started at 2022-11-25 15:57:17 +0000 UTC (0+7 container statuses recorded) Nov 25 16:04:53.066: INFO: Container csi-attacher ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container csi-provisioner ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container csi-resizer ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container hostpath ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container liveness-probe ready: true, restart count 4 Nov 25 16:04:53.066: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 25 16:04:53.067: INFO: kube-proxy-bootstrap-e2e-minion-group-6gq3 started at 
2022-11-25 15:55:37 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container kube-proxy ready: false, restart count 5 Nov 25 16:04:53.067: INFO: l7-default-backend-8549d69d99-m478x started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 16:04:53.067: INFO: konnectivity-agent-prjfw started at 2022-11-25 15:55:44 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container konnectivity-agent ready: false, restart count 4 Nov 25 16:04:53.067: INFO: pod-subpath-test-dynamicpv-vf5q started at 2022-11-25 15:57:32 +0000 UTC (1+2 container statuses recorded) Nov 25 16:04:53.067: INFO: Init container init-volume-dynamicpv-vf5q ready: true, restart count 2 Nov 25 16:04:53.067: INFO: Container test-container-subpath-dynamicpv-vf5q ready: true, restart count 2 Nov 25 16:04:53.067: INFO: Container test-container-volume-dynamicpv-vf5q ready: true, restart count 2 Nov 25 16:04:53.067: INFO: pod-subpath-test-preprovisionedpv-zrc5 started at 2022-11-25 15:57:37 +0000 UTC (1+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Init container init-volume-preprovisionedpv-zrc5 ready: true, restart count 0 Nov 25 16:04:53.067: INFO: Container test-container-subpath-preprovisionedpv-zrc5 ready: false, restart count 0 Nov 25 16:04:53.067: INFO: test-container-pod started at 2022-11-25 16:00:36 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container webserver ready: true, restart count 2 Nov 25 16:04:53.067: INFO: external-provisioner-grbwx started at 2022-11-25 16:00:07 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container nfs-provisioner ready: false, restart count 2 Nov 25 16:04:53.067: INFO: hostexec-bootstrap-e2e-minion-group-6gq3-vnjns started at 2022-11-25 15:57:15 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container agnhost-container ready: true, restart count 2 Nov 25 16:04:53.067: INFO: netserver-0 started at 2022-11-25 15:59:53 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container webserver ready: true, restart count 2 Nov 25 16:04:53.067: INFO: affinity-lb-esipp-transition-b9tsx started at 2022-11-25 16:04:45 +0000 UTC (0+1 container statuses recorded) Nov 25 16:04:53.067: INFO: Container affinity-lb-esipp-transition ready: false, restart count 0 Nov 25 16:04:53.796: INFO: Logging node info for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:53.865: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9cl6 074fe96a-325f-4d5f-83a2-c840a04a6f6e 4268 0 2022-11-25 15:55:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9cl6 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-9cl6 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8031":"bootstrap-e2e-minion-group-9cl6","csi-mock-csi-mock-volumes-5257":"bootstrap-e2e-minion-group-9cl6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:35 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 16:00:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 16:04:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-9cl6,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 
91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:42 +0000 UTC,LastTransitionTime:2022-11-25 15:55:40 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:44 +0000 UTC,LastTransitionTime:2022-11-25 15:55:44 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:00:12 +0000 UTC,LastTransitionTime:2022-11-25 15:55:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.203.132.179,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9cl6.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8858ca8f7f864c182ba49f423846650c,SystemUUID:8858ca8f-7f86-4c18-2ba4-9f423846650c,BootID:fbd96363-13a6-49de-a1fa-0e73a4570da5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 16:04:53.866: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:53.935: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9cl6 Nov 25 16:04:54.025: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-9cl6: error trying to reach service: No agent available Nov 25 16:04:54.025: INFO: Logging node info for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:54.080: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-sp52 50f2d6f8-49b3-493a-a11a-263fafdd25f0 3873 0 2022-11-25 15:55:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-sp52 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-sp52 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4245":"bootstrap-e2e-minion-group-sp52"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 15:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 15:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-25 15:59:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-25 16:00:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-25 16:04:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-2/us-west1-b/bootstrap-e2e-minion-group-sp52,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 16:00:46 +0000 UTC,LastTransitionTime:2022-11-25 15:55:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 15:55:55 +0000 UTC,LastTransitionTime:2022-11-25 15:55:55 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 16:02:05 +0000 UTC,LastTransitionTime:2022-11-25 15:55:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.33.187,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-sp52.c.k8s-jkns-gci-gce-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4377d7377743ba64e8758a2f00cb7bc9,SystemUUID:4377d737-7743-ba64-e875-8a2f00cb7bc9,BootID:601334d8-63bd-4289-88cf-b3039f865736,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-4245^11190c3a-6cda-11ed-a094-9254b624d57d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4816^db801730-6cd9-11ed-a3b2-826b42a3050e,DevicePath:,},},Config:nil,},} Nov 25 16:04:54.081: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:54.181: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-sp52 Nov 25 16:04:54.341: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-sp52: error trying to reach service: No agent available [DeferCleanup 
(Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-5731" for this suite. 11/25/22 16:04:54.341
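The repeated "Unable to retrieve kubelet pods for node ...: error trying to reach service: No agent available" lines above mean the apiserver has no konnectivity tunnel to the node, so the framework's kubelet-proxy query cannot be served. As a rough sketch (not the framework's code; the clientset setup and node name are assumptions taken from this log), the same pods can still be listed directly from the apiserver with a field selector, which does not depend on the tunnel:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, matching the one logged by the suite.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods bound to the node straight from the apiserver; this avoids the
	// kubelet proxy path that fails with "No agent available".
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=bootstrap-e2e-minion-group-9cl6",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}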
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sVariable\sExpansion\sshould\sverify\sthat\sa\sfailing\ssubpath\sexpansion\scan\sbe\smodified\sduring\sthe\slifecycle\sof\sa\scontainer\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/pod/pod_client.go:134 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Update(0xc0015a2eb8?, {0xc0015f7600?, 0x32?}, 0x78958b0?) test/e2e/framework/pod/pod_client.go:134 +0xd5 k8s.io/kubernetes/test/e2e/common/node.glob..func7.7() test/e2e/common/node/expansion.go:272 +0x3e6 There were additional failures detected after the initial failure: [FAILED] Nov 25 15:57:40.912: failed to list events in namespace "var-expansion-6189": Get "https://35.197.125.133/api/v1/namespaces/var-expansion-6189/events": dial tcp 35.197.125.133:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 15:57:40.952: Couldn't delete ns: "var-expansion-6189": Delete "https://35.197.125.133/api/v1/namespaces/var-expansion-6189": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/var-expansion-6189", Err:(*net.OpError)(0xc002b56a50)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 from junit_01.xml
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 15:57:35.671 Nov 25 15:57:35.671: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename var-expansion 11/25/22 15:57:35.672 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 15:57:35.961 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 15:57:36.073 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/common/node/expansion.go:225 STEP: creating the pod with failed condition 11/25/22 15:57:36.156 Nov 25 15:57:36.211: INFO: Waiting up to 2m0s for pod "var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f" in namespace "var-expansion-6189" to be "running" Nov 25 15:57:36.252: INFO: Pod "var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.004014ms Nov 25 15:57:38.292: INFO: Pod "var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08169846s Nov 25 15:57:40.292: INFO: Encountered non-retryable error while getting pod var-expansion-6189/var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f: Get "https://35.197.125.133/api/v1/namespaces/var-expansion-6189/pods/var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f": dial tcp 35.197.125.133:443: connect: connection refused STEP: updating the pod 11/25/22 15:57:40.292 Nov 25 15:57:40.832: INFO: Unexpected error: <*errors.errorString | 0xc0010b3bc0>: { s: "failed to get pod \"var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f\": Get \"https://35.197.125.133/api/v1/namespaces/var-expansion-6189/pods/var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f\": dial tcp 35.197.125.133:443: connect: connection refused", } Nov 25 15:57:40.832: FAIL: failed to get pod "var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f": Get "https://35.197.125.133/api/v1/namespaces/var-expansion-6189/pods/var-expansion-74796279-6d49-4797-96b3-e1ccd39e019f": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).Update(0xc0015a2eb8?, {0xc0015f7600?, 0x32?}, 0x78958b0?) test/e2e/framework/pod/pod_client.go:134 +0xd5 k8s.io/kubernetes/test/e2e/common/node.glob..func7.7() test/e2e/common/node/expansion.go:272 +0x3e6 [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 25 15:57:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 15:57:40.872 STEP: Collecting events from namespace "var-expansion-6189". 
11/25/22 15:57:40.873 Nov 25 15:57:40.912: INFO: Unexpected error: failed to list events in namespace "var-expansion-6189": <*url.Error | 0xc002b1a720>: { Op: "Get", URL: "https://35.197.125.133/api/v1/namespaces/var-expansion-6189/events", Err: <*net.OpError | 0xc0038ce2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001a303f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 197, 125, 133], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003bdace0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 25 15:57:40.912: FAIL: failed to list events in namespace "var-expansion-6189": Get "https://35.197.125.133/api/v1/namespaces/var-expansion-6189/events": dial tcp 35.197.125.133:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0016ba5c0, {0xc000807488, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc000d77ba0}, {0xc000807488, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0016ba650?, {0xc000807488?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00036b680) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc0014a8e80?, 0xc002d56fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc000e19c28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0014a8e80?, 0x29449fc?}, {0xae73300?, 0xc002d56f80?, 0x2fdb5c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-6189" for this suite. 11/25/22 15:57:40.913 Nov 25 15:57:40.952: FAIL: Couldn't delete ns: "var-expansion-6189": Delete "https://35.197.125.133/api/v1/namespaces/var-expansion-6189": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/var-expansion-6189", Err:(*net.OpError)(0xc002b56a50)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00036b680) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc0014a8d60?, 0x9?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x9?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc0014a8d60?, 0xc0022d9500?}, {0xae73300?, 0x9?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
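Every failure in this test, from the pod Get through the namespace deletion, is the same "dial tcp 35.197.125.133:443: connect: connection refused": the control plane stopped accepting connections partway through the test, so the test body, the event dump, and the namespace cleanup all fail identically. A minimal probe (a sketch, assuming the same kubeconfig path) hits the apiserver's /readyz endpoint and would fail with the identical error while the control plane is down:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same endpoint and port every client-go call above dials; while the
	// apiserver is down this returns "connect: connection refused".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/readyz").DoRaw(context.TODO())
	fmt.Printf("readyz: %q err: %v\n", string(body), err)
}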
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sdifferent\snode$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000ef13b0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:11:31.821 Nov 25 16:11:31.821: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename multivolume 11/25/22 16:11:31.822 Nov 25 16:11:31.861: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:33.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:35.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:37.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:39.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:41.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:43.902: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:45.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:47.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:49.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:51.902: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:53.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:55.902: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:57.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:11:59.901: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:01.902: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:01.941: INFO: Unexpected error while creating namespace: Post "https://35.197.125.133/api/v1/namespaces": dial tcp 35.197.125.133:443: connect: connection refused Nov 25 16:12:01.941: INFO: Unexpected error: <*errors.errorString | 0xc0001fd9a0>: { s: "timed out waiting for the condition", } 
Nov 25 16:12:01.941: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000ef13b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/framework/node/init/init.go:32 Nov 25 16:12:01.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/25/22 16:12:01.981 [DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] tear down framework | framework.go:193
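The thirty seconds of "Unexpected error while creating namespace ... connection refused" followed by "timed out waiting for the condition" is the namespace-creation retry in the framework's BeforeEach giving up. A minimal sketch of that pattern (not the framework's exact code; the interval and timeout are assumptions inferred from the timestamps above) using client-go and the apimachinery wait helpers:

package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createNamespaceWithRetry keeps retrying the Create until it succeeds or the
// poll times out, at which point wait returns the generic
// "timed out waiting for the condition" error seen in the log.
func createNamespaceWithRetry(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Create(context.TODO(),
			&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
			metav1.CreateOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down;
			// swallow the error and retry until the timeout fires.
			return false, nil
		}
		return true, nil
	})
}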
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
test/e2e/framework/volume/fixtures.go:668 k8s.io/kubernetes/test/e2e/framework/volume.VerifyExecInPodSucceed(0x75cb450?, 0xa?, {0xc00164ded8, 0x14}) test/e2e/framework/volume/fixtures.go:668 +0x392 k8s.io/kubernetes/test/e2e/framework/volume.CheckVolumeModeOfPath(0xc003202100?, 0x3e?, {0xc0030d3890?, 0x0?}, {0xc0048f8660, 0xc}) test/e2e/framework/volume/fixtures.go:636 +0xa9 k8s.io/kubernetes/test/e2e/storage/testsuites.testAccessMultipleVolumes(0xc0010d33b0, {0x801de88, 0xc002f8a4e0}, {0xc002f08c80, 0x10}, {{0xc0010ea520?, 0xc0020ffcd8?}, 0x0?, 0x0?}, {0xc0010a73f0, ...}, ...) test/e2e/storage/testsuites/multivolume.go:504 +0x630 k8s.io/kubernetes/test/e2e/storage/testsuites.TestAccessMultipleVolumesAcrossPodRecreation(0x75c5269?, {0x801de88, 0xc002f8a4e0}, {0xc002f08c80, 0x10}, {{0xc0010ea520, 0x1f}, 0x0, 0x0}, {0xc0010a73f0, ...}, ...) test/e2e/storage/testsuites/multivolume.go:533 +0x1b1 k8s.io/kubernetes/test/e2e/storage/testsuites.(*multiVolumeTestSuite).DefineTests.func5() test/e2e/storage/testsuites/multivolume.go:236 +0x485 There were additional failures detected after the initial failure: [FAILED] Nov 25 16:11:00.221: failed to list events in namespace "multivolume-9587-360": Get "https://35.197.125.133/api/v1/namespaces/multivolume-9587-360/events": dial tcp 35.197.125.133:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 16:11:00.304: failed to list events in namespace "multivolume-9587": Get "https://35.197.125.133/api/v1/namespaces/multivolume-9587/events": dial tcp 35.197.125.133:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 25 16:11:00.384: Couldn't delete ns: "multivolume-9587-360": Delete "https://35.197.125.133/api/v1/namespaces/multivolume-9587-360": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/multivolume-9587-360", Err:(*net.OpError)(0xc002fb90e0)}),Couldn't delete ns: "multivolume-9587": Delete "https://35.197.125.133/api/v1/namespaces/multivolume-9587": dial tcp 35.197.125.133:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.197.125.133/api/v1/namespaces/multivolume-9587", Err:(*net.OpError)(0xc0046d9540)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
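CheckVolumeModeOfPath fails here because the exec into the pod cannot be tunneled ("error dialing backend: No agent available" in the full log below), not because the volume mode is wrong; the assertion itself simply runs "test -b /mnt/volume1" inside the pod to confirm the Block-volmode volume shows up as a block device. A hypothetical stand-alone equivalent of that check in Go (the path and helper name are illustrative, not the framework's API):

package main

import (
	"fmt"
	"os"
)

// isBlockDevice reports whether path is a block device, which is what the
// Block-volmode assertion ("test -b <path>") verifies inside the test pod.
func isBlockDevice(path string) (bool, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	m := fi.Mode()
	return m&os.ModeDevice != 0 && m&os.ModeCharDevice == 0, nil
}

func main() {
	ok, err := isBlockDevice("/mnt/volume1")
	fmt.Println(ok, err)
}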
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/25/22 16:06:37.153 Nov 25 16:06:37.154: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename multivolume 11/25/22 16:06:37.156 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:06:37.433 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/25/22 16:06:37.527 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/framework/metrics/init/init.go:31 [It] should access to two volumes with different volume mode and retain data across pod recreation on the same node test/e2e/storage/testsuites/multivolume.go:206 STEP: Building a driver namespace object, basename multivolume-9587 11/25/22 16:06:37.658 STEP: Waiting for a default service account to be provisioned in namespace 11/25/22 16:06:37.871 STEP: deploying csi-hostpath driver 11/25/22 16:06:37.971 Nov 25 16:06:38.179: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-attacher Nov 25 16:06:38.246: INFO: creating *v1.ClusterRole: external-attacher-runner-multivolume-9587 Nov 25 16:06:38.246: INFO: Define cluster role external-attacher-runner-multivolume-9587 Nov 25 16:06:38.311: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-multivolume-9587 Nov 25 16:06:38.381: INFO: creating *v1.Role: multivolume-9587-360/external-attacher-cfg-multivolume-9587 Nov 25 16:06:38.440: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-attacher-role-cfg Nov 25 16:06:38.502: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-provisioner Nov 25 16:06:38.553: INFO: creating *v1.ClusterRole: external-provisioner-runner-multivolume-9587 Nov 25 16:06:38.553: INFO: Define cluster role external-provisioner-runner-multivolume-9587 Nov 25 16:06:38.628: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-multivolume-9587 Nov 25 16:06:38.694: INFO: creating *v1.Role: multivolume-9587-360/external-provisioner-cfg-multivolume-9587 Nov 25 16:06:38.864: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-provisioner-role-cfg Nov 25 16:06:38.980: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-snapshotter Nov 25 16:06:39.100: INFO: creating *v1.ClusterRole: external-snapshotter-runner-multivolume-9587 Nov 25 16:06:39.100: INFO: Define cluster role external-snapshotter-runner-multivolume-9587 Nov 25 16:06:39.335: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-multivolume-9587 Nov 25 16:06:39.584: INFO: creating *v1.Role: multivolume-9587-360/external-snapshotter-leaderelection-multivolume-9587 Nov 25 16:06:39.732: INFO: creating *v1.RoleBinding: multivolume-9587-360/external-snapshotter-leaderelection Nov 25 16:06:39.841: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-external-health-monitor-controller Nov 25 16:06:39.913: INFO: creating *v1.ClusterRole: external-health-monitor-controller-runner-multivolume-9587 Nov 25 16:06:39.913: INFO: Define cluster role external-health-monitor-controller-runner-multivolume-9587 Nov 25 16:06:39.973: INFO: creating *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-multivolume-9587 Nov 25 16:06:40.074: INFO: creating *v1.Role: multivolume-9587-360/external-health-monitor-controller-cfg-multivolume-9587 Nov 25 16:06:40.170: INFO: creating *v1.RoleBinding: 
multivolume-9587-360/csi-external-health-monitor-controller-role-cfg Nov 25 16:06:40.259: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-resizer Nov 25 16:06:40.332: INFO: creating *v1.ClusterRole: external-resizer-runner-multivolume-9587 Nov 25 16:06:40.332: INFO: Define cluster role external-resizer-runner-multivolume-9587 Nov 25 16:06:40.429: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-multivolume-9587 Nov 25 16:06:40.549: INFO: creating *v1.Role: multivolume-9587-360/external-resizer-cfg-multivolume-9587 Nov 25 16:06:40.636: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-resizer-role-cfg Nov 25 16:06:40.747: INFO: creating *v1.CSIDriver: csi-hostpath-multivolume-9587 Nov 25 16:06:40.812: INFO: creating *v1.ServiceAccount: multivolume-9587-360/csi-hostpathplugin-sa Nov 25 16:06:40.907: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-multivolume-9587 Nov 25 16:06:40.953: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-multivolume-9587 Nov 25 16:06:41.026: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-multivolume-9587 Nov 25 16:06:41.091: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-multivolume-9587 Nov 25 16:06:41.143: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-multivolume-9587 Nov 25 16:06:41.221: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-hostpathplugin-attacher-role Nov 25 16:06:41.298: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-hostpathplugin-health-monitor-controller-role Nov 25 16:06:41.357: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-hostpathplugin-provisioner-role Nov 25 16:06:41.406: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-hostpathplugin-resizer-role Nov 25 16:06:41.495: INFO: creating *v1.RoleBinding: multivolume-9587-360/csi-hostpathplugin-snapshotter-role Nov 25 16:06:41.558: INFO: creating *v1.StatefulSet: multivolume-9587-360/csi-hostpathplugin Nov 25 16:06:41.627: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-multivolume-9587 Nov 25 16:06:41.701: INFO: Creating resource for dynamic PV Nov 25 16:06:41.701: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(csi-hostpath) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-9587pzgxm 11/25/22 16:06:41.701 STEP: creating a claim 11/25/22 16:06:41.803 Nov 25 16:06:42.020: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath42r8g] to have phase Bound Nov 25 16:06:42.095: INFO: PersistentVolumeClaim csi-hostpath42r8g found but phase is Pending instead of Bound. Nov 25 16:06:44.151: INFO: PersistentVolumeClaim csi-hostpath42r8g found but phase is Pending instead of Bound. Nov 25 16:06:46.244: INFO: PersistentVolumeClaim csi-hostpath42r8g found but phase is Pending instead of Bound. 
Nov 25 16:06:48.318: INFO: PersistentVolumeClaim csi-hostpath42r8g found and phase=Bound (6.298096652s) Nov 25 16:06:48.528: INFO: Creating resource for dynamic PV Nov 25 16:06:48.528: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(csi-hostpath) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-95878nl8j 11/25/22 16:06:48.528 STEP: creating a claim 11/25/22 16:06:48.689 Nov 25 16:06:48.832: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath9rcht] to have phase Bound Nov 25 16:06:49.006: INFO: PersistentVolumeClaim csi-hostpath9rcht found but phase is Pending instead of Bound. WARNING: pod log: csi-hostpathplugin-0/hostpath: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/hostpath?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/node-driver-registrar?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/liveness-probe: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/liveness-probe?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-attacher: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-attacher?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-provisioner: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-provisioner?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-resizer: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-resizer?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-snapshotter: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-snapshotter?follow=true": No agent available Nov 25 16:06:51.137: INFO: PersistentVolumeClaim csi-hostpath9rcht found but phase is Pending instead of Bound. Nov 25 16:06:53.263: INFO: PersistentVolumeClaim csi-hostpath9rcht found and phase=Bound (4.430528326s) STEP: Creating pod on {Name:bootstrap-e2e-minion-group-6gq3 Selector:map[] Affinity:nil} with multiple volumes 11/25/22 16:06:53.399 Nov 25 16:06:53.489: INFO: Waiting up to 5m0s for pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815" in namespace "multivolume-9587" to be "running" Nov 25 16:06:53.568: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. 
Elapsed: 79.519121ms WARNING: pod log: csi-hostpathplugin-0/hostpath: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/hostpath?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/node-driver-registrar?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/liveness-probe: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/liveness-probe?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-attacher: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-attacher?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-provisioner: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-provisioner?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-resizer: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-resizer?follow=true": No agent available WARNING: pod log: csi-hostpathplugin-0/csi-snapshotter: Get "https://10.138.0.3:10250/containerLogs/multivolume-9587-360/csi-hostpathplugin-0/csi-snapshotter?follow=true": No agent available Nov 25 16:06:55.638: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14896981s Nov 25 16:06:57.622: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133338256s Nov 25 16:06:59.637: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147878055s Nov 25 16:07:01.670: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181569633s Nov 25 16:07:03.634: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Pending", Reason="", readiness=false. Elapsed: 10.145118793s Nov 25 16:07:05.640: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.151084113s Nov 25 16:07:05.640: INFO: Pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815" satisfied condition "running" STEP: Checking if the volume1 exists as expected volume mode (Block) 11/25/22 16:07:05.723 Nov 25 16:07:05.723: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-9587 PodName:pod-8c875920-4772-4c12-9d78-d10bd83f2815 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 25 16:07:05.723: INFO: >>> kubeConfig: /workspace/.kube/config Nov 25 16:07:05.725: INFO: ExecWithOptions: Clientset creation Nov 25 16:07:05.725: INFO: ExecWithOptions: execute(POST https://35.197.125.133/api/v1/namespaces/multivolume-9587/pods/pod-8c875920-4772-4c12-9d78-d10bd83f2815/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fmnt%2Fvolume1&container=write-pod&container=write-pod&stderr=true&stdout=true) Nov 25 16:07:05.944: INFO: Unexpected error: "test -b /mnt/volume1" should succeed, but failed with error message "error dialing backend: No agent available" stdout: stderr: : <*errors.StatusError | 0xc003789360>: { ErrStatus: code: 500 message: 'error dialing backend: No agent available' metadata: {} status: Failure, } Nov 25 16:07:05.944: FAIL: "test -b /mnt/volume1" should succeed, but failed with error message "error dialing backend: No agent available" stdout: stderr: : error dialing backend: No agent available Full Stack Trace k8s.io/kubernetes/test/e2e/framework/volume.VerifyExecInPodSucceed(0x75cb450?, 0xa?, {0xc00164ded8, 0x14}) test/e2e/framework/volume/fixtures.go:668 +0x392 k8s.io/kubernetes/test/e2e/framework/volume.CheckVolumeModeOfPath(0xc003202100?, 0x3e?, {0xc0030d3890?, 0x0?}, {0xc0048f8660, 0xc}) test/e2e/framework/volume/fixtures.go:636 +0xa9 k8s.io/kubernetes/test/e2e/storage/testsuites.testAccessMultipleVolumes(0xc0010d33b0, {0x801de88, 0xc002f8a4e0}, {0xc002f08c80, 0x10}, {{0xc0010ea520?, 0xc0020ffcd8?}, 0x0?, 0x0?}, {0xc0010a73f0, ...}, ...) test/e2e/storage/testsuites/multivolume.go:504 +0x630 k8s.io/kubernetes/test/e2e/storage/testsuites.TestAccessMultipleVolumesAcrossPodRecreation(0x75c5269?, {0x801de88, 0xc002f8a4e0}, {0xc002f08c80, 0x10}, {{0xc0010ea520, 0x1f}, 0x0, 0x0}, {0xc0010a73f0, ...}, ...) test/e2e/storage/testsuites/multivolume.go:533 +0x1b1 k8s.io/kubernetes/test/e2e/storage/testsuites.(*multiVolumeTestSuite).DefineTests.func5() test/e2e/storage/testsuites/multivolume.go:236 +0x485 Nov 25 16:07:05.945: INFO: Deleting pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815" in namespace "multivolume-9587" Nov 25 16:07:06.038: INFO: Wait up to 5m0s for pod "pod-8c875920-4772-4c12-9d78-d10bd83f2815" to be fully deleted STEP: Deleting pvc 11/25/22 16:07:10.176 Nov 25 16:07:10.176: INFO: Deleting PersistentVolumeClaim "csi-hostpath42r8g" Nov 25 16:07:10.275: INFO: Waiting up to 5m0s for PersistentVolume pvc-21e47e0b-cf73-4f78-8570-0fbeaa028bd9 to get deleted Nov 25 16:07:10.365: INFO: PersistentVolume pvc-21e47e0b-cf73-4f78-8570-0fbeaa028bd9 found and